00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 838 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3498 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.184 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.251 Using shallow fetch with depth 1 00:00:00.251 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.251 > git --version # timeout=10 00:00:00.307 > git --version # 'git version 2.39.2' 00:00:00.307 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.334 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.334 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.001 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.015 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.029 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:07.029 > git config core.sparsecheckout # timeout=10 00:00:07.041 > git read-tree -mu HEAD # timeout=10 00:00:07.061 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:07.080 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:07.080 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:07.158 [Pipeline] Start of Pipeline 00:00:07.168 [Pipeline] library 00:00:07.170 Loading library shm_lib@master 00:00:07.170 Library shm_lib@master is cached. Copying from home. 00:00:07.185 [Pipeline] node 00:00:07.198 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.200 [Pipeline] { 00:00:07.212 [Pipeline] catchError 00:00:07.214 [Pipeline] { 00:00:07.226 [Pipeline] wrap 00:00:07.234 [Pipeline] { 00:00:07.240 [Pipeline] stage 00:00:07.242 [Pipeline] { (Prologue) 00:00:07.464 [Pipeline] sh 00:00:07.750 + logger -p user.info -t JENKINS-CI 00:00:07.768 [Pipeline] echo 00:00:07.770 Node: CYP9 00:00:07.777 [Pipeline] sh 00:00:08.072 [Pipeline] setCustomBuildProperty 00:00:08.082 [Pipeline] echo 00:00:08.083 Cleanup processes 00:00:08.086 [Pipeline] sh 00:00:08.370 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.370 2670388 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.384 [Pipeline] sh 00:00:08.671 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.671 ++ grep -v 'sudo pgrep' 00:00:08.671 ++ awk '{print $1}' 00:00:08.671 + sudo kill -9 00:00:08.671 + true 00:00:08.686 [Pipeline] cleanWs 00:00:08.694 [WS-CLEANUP] Deleting project workspace... 00:00:08.694 [WS-CLEANUP] Deferred wipeout is used... 
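The prologue above has checked out the jenkins_build_pool job definitions and is clearing the workspace: it looks for stale processes still running out of the previous build's spdk tree and force-kills them before the wipeout. Condensed, the traced pgrep/grep/awk/kill sequence amounts to the sketch below; the workspace path comes from the log, while the single pipeline (and the xargs -r guard) is an editor's simplification, not the job's literal script.

WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# List whatever is still running out of the old spdk tree, drop the pgrep
# invocation itself, and force-kill the remaining PIDs. 'xargs -r' skips the
# kill when the list is empty, and '|| true' keeps the step from failing.
sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' \
    | xargs -r sudo kill -9 || true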
00:00:08.701 [WS-CLEANUP] done 00:00:08.707 [Pipeline] setCustomBuildProperty 00:00:08.722 [Pipeline] sh 00:00:09.012 + sudo git config --global --replace-all safe.directory '*' 00:00:09.080 [Pipeline] httpRequest 00:00:09.501 [Pipeline] echo 00:00:09.503 Sorcerer 10.211.164.101 is alive 00:00:09.513 [Pipeline] retry 00:00:09.517 [Pipeline] { 00:00:09.533 [Pipeline] httpRequest 00:00:09.538 HttpMethod: GET 00:00:09.539 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:09.539 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:09.556 Response Code: HTTP/1.1 200 OK 00:00:09.556 Success: Status code 200 is in the accepted range: 200,404 00:00:09.556 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:15.092 [Pipeline] } 00:00:15.108 [Pipeline] // retry 00:00:15.115 [Pipeline] sh 00:00:15.400 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:15.415 [Pipeline] httpRequest 00:00:15.816 [Pipeline] echo 00:00:15.818 Sorcerer 10.211.164.101 is alive 00:00:15.827 [Pipeline] retry 00:00:15.829 [Pipeline] { 00:00:15.844 [Pipeline] httpRequest 00:00:15.848 HttpMethod: GET 00:00:15.848 URL: http://10.211.164.101/packages/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:15.849 Sending request to url: http://10.211.164.101/packages/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:15.872 Response Code: HTTP/1.1 200 OK 00:00:15.873 Success: Status code 200 is in the accepted range: 200,404 00:00:15.873 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:01:58.520 [Pipeline] } 00:01:58.537 [Pipeline] // retry 00:01:58.545 [Pipeline] sh 00:01:58.832 + tar --no-same-owner -xf spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:02:01.392 [Pipeline] sh 00:02:01.676 + git -C spdk log --oneline -n5 00:02:01.676 e9b861378 lib/iscsi: Fix: Unregister logout timer 00:02:01.676 081f43f2b lib/nvmf: Fix memory leak in nvmf_bdev_ctrlr_unmap 00:02:01.676 daeaec816 test/unit: remove unneeded MOCKs from ftl unit tests 00:02:01.676 78f92084e module/bdev: dump more info about compress 00:02:01.676 5e156a6e7 nvmf/rdma: fix last_wqe_reached ctx handling 00:02:01.693 [Pipeline] withCredentials 00:02:01.703 > git --version # timeout=10 00:02:01.716 > git --version # 'git version 2.39.2' 00:02:01.734 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:01.736 [Pipeline] { 00:02:01.746 [Pipeline] retry 00:02:01.748 [Pipeline] { 00:02:01.765 [Pipeline] sh 00:02:02.055 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:02.068 [Pipeline] } 00:02:02.088 [Pipeline] // retry 00:02:02.093 [Pipeline] } 00:02:02.109 [Pipeline] // withCredentials 00:02:02.118 [Pipeline] httpRequest 00:02:02.657 [Pipeline] echo 00:02:02.659 Sorcerer 10.211.164.101 is alive 00:02:02.669 [Pipeline] retry 00:02:02.671 [Pipeline] { 00:02:02.687 [Pipeline] httpRequest 00:02:02.692 HttpMethod: GET 00:02:02.692 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.692 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.696 Response Code: HTTP/1.1 200 OK 00:02:02.696 Success: Status code 200 is in the accepted range: 200,404 00:02:02.696 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:07.668 [Pipeline] } 00:02:07.687 [Pipeline] // retry 00:02:07.694 [Pipeline] sh 00:02:07.982 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:09.909 [Pipeline] sh 00:02:10.194 + git -C dpdk log --oneline -n5 00:02:10.194 eeb0605f11 version: 23.11.0 00:02:10.194 238778122a doc: update release notes for 23.11 00:02:10.194 46aa6b3cfc doc: fix description of RSS features 00:02:10.194 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:10.194 7e421ae345 devtools: support skipping forbid rule check 00:02:10.207 [Pipeline] } 00:02:10.220 [Pipeline] // stage 00:02:10.228 [Pipeline] stage 00:02:10.230 [Pipeline] { (Prepare) 00:02:10.247 [Pipeline] writeFile 00:02:10.261 [Pipeline] sh 00:02:10.548 + logger -p user.info -t JENKINS-CI 00:02:10.561 [Pipeline] sh 00:02:10.847 + logger -p user.info -t JENKINS-CI 00:02:10.860 [Pipeline] sh 00:02:11.146 + cat autorun-spdk.conf 00:02:11.146 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.146 SPDK_TEST_NVMF=1 00:02:11.146 SPDK_TEST_NVME_CLI=1 00:02:11.146 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.146 SPDK_TEST_NVMF_NICS=e810 00:02:11.146 SPDK_TEST_VFIOUSER=1 00:02:11.146 SPDK_RUN_UBSAN=1 00:02:11.146 NET_TYPE=phy 00:02:11.146 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.146 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.154 RUN_NIGHTLY=1 00:02:11.159 [Pipeline] readFile 00:02:11.184 [Pipeline] withEnv 00:02:11.186 [Pipeline] { 00:02:11.199 [Pipeline] sh 00:02:11.486 + set -ex 00:02:11.486 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:11.486 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.486 ++ SPDK_TEST_NVMF=1 00:02:11.486 ++ SPDK_TEST_NVME_CLI=1 00:02:11.486 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.486 ++ SPDK_TEST_NVMF_NICS=e810 00:02:11.486 ++ SPDK_TEST_VFIOUSER=1 00:02:11.486 ++ SPDK_RUN_UBSAN=1 00:02:11.486 ++ NET_TYPE=phy 00:02:11.486 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.486 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.486 ++ RUN_NIGHTLY=1 00:02:11.486 + case $SPDK_TEST_NVMF_NICS in 00:02:11.486 + DRIVERS=ice 00:02:11.486 + [[ tcp == \r\d\m\a ]] 00:02:11.486 + [[ -n ice ]] 00:02:11.486 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:11.486 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:11.486 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:11.486 rmmod: ERROR: Module irdma is not currently loaded 00:02:11.486 rmmod: ERROR: Module i40iw is not currently loaded 00:02:11.486 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:11.486 + true 00:02:11.486 + for D in $DRIVERS 00:02:11.486 + sudo modprobe ice 00:02:11.486 + exit 0 00:02:11.496 [Pipeline] } 00:02:11.507 [Pipeline] // withEnv 00:02:11.510 [Pipeline] } 00:02:11.520 [Pipeline] // stage 00:02:11.527 [Pipeline] catchError 00:02:11.528 [Pipeline] { 00:02:11.540 [Pipeline] timeout 00:02:11.540 Timeout set to expire in 1 hr 0 min 00:02:11.541 [Pipeline] { 00:02:11.554 [Pipeline] stage 00:02:11.555 [Pipeline] { (Tests) 00:02:11.568 [Pipeline] sh 00:02:11.855 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.855 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.855 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.855 + [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:11.855 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.855 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.855 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:11.855 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.855 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.855 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.855 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:11.855 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.855 + source /etc/os-release 00:02:11.855 ++ NAME='Fedora Linux' 00:02:11.855 ++ VERSION='39 (Cloud Edition)' 00:02:11.855 ++ ID=fedora 00:02:11.855 ++ VERSION_ID=39 00:02:11.855 ++ VERSION_CODENAME= 00:02:11.855 ++ PLATFORM_ID=platform:f39 00:02:11.855 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.855 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.855 ++ LOGO=fedora-logo-icon 00:02:11.855 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.855 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.855 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.855 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.855 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.855 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.855 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.855 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.855 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.856 ++ SUPPORT_END=2024-11-12 00:02:11.856 ++ VARIANT='Cloud Edition' 00:02:11.856 ++ VARIANT_ID=cloud 00:02:11.856 + uname -a 00:02:11.856 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.856 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:15.156 Hugepages 00:02:15.156 node hugesize free / total 00:02:15.156 node0 1048576kB 0 / 0 00:02:15.156 node0 2048kB 0 / 0 00:02:15.156 node1 1048576kB 0 / 0 00:02:15.156 node1 2048kB 0 / 0 00:02:15.156 00:02:15.156 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:15.156 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:15.156 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:15.156 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:15.156 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:15.156 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:15.156 + rm -f /tmp/spdk-ld-path 00:02:15.156 + source autorun-spdk.conf 00:02:15.156 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.156 ++ SPDK_TEST_NVMF=1 00:02:15.156 ++ SPDK_TEST_NVME_CLI=1 00:02:15.156 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.156 ++ SPDK_TEST_NVMF_NICS=e810 00:02:15.156 ++ SPDK_TEST_VFIOUSER=1 00:02:15.156 ++ SPDK_RUN_UBSAN=1 00:02:15.156 ++ NET_TYPE=phy 00:02:15.156 ++ 
SPDK_TEST_NATIVE_DPDK=v23.11 00:02:15.156 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.156 ++ RUN_NIGHTLY=1 00:02:15.156 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.156 + [[ -n '' ]] 00:02:15.156 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.156 + for M in /var/spdk/build-*-manifest.txt 00:02:15.156 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:15.156 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.156 + for M in /var/spdk/build-*-manifest.txt 00:02:15.156 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.157 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.157 + for M in /var/spdk/build-*-manifest.txt 00:02:15.157 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.157 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.157 ++ uname 00:02:15.157 + [[ Linux == \L\i\n\u\x ]] 00:02:15.157 + sudo dmesg -T 00:02:15.157 + sudo dmesg --clear 00:02:15.157 + dmesg_pid=2671859 00:02:15.157 + [[ Fedora Linux == FreeBSD ]] 00:02:15.157 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.157 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.157 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.157 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:15.157 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:15.157 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.157 + export FIO_BIN=/usr/src/fio-static/fio 00:02:15.157 + FIO_BIN=/usr/src/fio-static/fio 00:02:15.157 + sudo dmesg -Tw 00:02:15.157 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.157 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:15.157 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.157 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.157 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.157 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.157 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.157 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.157 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.157 Test configuration: 00:02:15.157 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.157 SPDK_TEST_NVMF=1 00:02:15.157 SPDK_TEST_NVME_CLI=1 00:02:15.157 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.157 SPDK_TEST_NVMF_NICS=e810 00:02:15.157 SPDK_TEST_VFIOUSER=1 00:02:15.157 SPDK_RUN_UBSAN=1 00:02:15.157 NET_TYPE=phy 00:02:15.157 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:15.157 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.157 RUN_NIGHTLY=1 17:02:13 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:15.157 17:02:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:15.157 17:02:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:15.157 17:02:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.157 17:02:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.157 17:02:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.157 17:02:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.157 17:02:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.157 17:02:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.157 17:02:13 -- paths/export.sh@5 -- $ export PATH 00:02:15.157 17:02:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.157 17:02:13 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:15.157 17:02:13 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:15.157 17:02:13 -- 
common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727794933.XXXXXX 00:02:15.157 17:02:13 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727794933.KcvEaT 00:02:15.157 17:02:13 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:15.157 17:02:13 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:15.157 17:02:13 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.157 17:02:13 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:15.157 17:02:13 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:15.157 17:02:13 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.157 17:02:13 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:15.157 17:02:13 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:15.157 17:02:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.157 17:02:13 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:15.157 17:02:13 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:15.157 17:02:13 -- pm/common@17 -- $ local monitor 00:02:15.157 17:02:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.157 17:02:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.157 17:02:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.157 17:02:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.157 17:02:13 -- pm/common@21 -- $ date +%s 00:02:15.157 17:02:13 -- pm/common@21 -- $ date +%s 00:02:15.157 17:02:13 -- pm/common@25 -- $ sleep 1 00:02:15.157 17:02:13 -- pm/common@21 -- $ date +%s 00:02:15.157 17:02:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727794933 00:02:15.157 17:02:13 -- pm/common@21 -- $ date +%s 00:02:15.157 17:02:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727794933 00:02:15.157 17:02:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727794933 00:02:15.157 17:02:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727794933 00:02:15.157 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727794933_collect-cpu-load.pm.log 00:02:15.157 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727794933_collect-vmstat.pm.log 00:02:15.157 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727794933_collect-cpu-temp.pm.log 00:02:15.157 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727794933_collect-bmc-pm.bmc.pm.log 00:02:16.102 17:02:14 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:16.102 17:02:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.102 17:02:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.102 17:02:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.102 17:02:14 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.102 Tue Oct 1 03:02:14 PM UTC 2024 00:02:16.102 17:02:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.102 v25.01-pre-23-ge9b861378 00:02:16.102 17:02:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.102 17:02:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.102 17:02:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.102 17:02:14 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:16.102 17:02:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.102 17:02:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.102 ************************************ 00:02:16.102 START TEST ubsan 00:02:16.102 ************************************ 00:02:16.102 17:02:14 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:16.102 using ubsan 00:02:16.102 00:02:16.102 real 0m0.001s 00:02:16.102 user 0m0.000s 00:02:16.102 sys 0m0.001s 00:02:16.102 17:02:14 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:16.102 17:02:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.102 ************************************ 00:02:16.102 END TEST ubsan 00:02:16.102 ************************************ 00:02:16.363 17:02:14 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:16.363 17:02:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:16.363 17:02:14 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:16.363 17:02:14 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:16.363 17:02:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.363 17:02:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.363 ************************************ 00:02:16.363 START TEST build_native_dpdk 00:02:16.363 ************************************ 00:02:16.363 17:02:14 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:16.363 17:02:14 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:16.363 eeb0605f11 version: 23.11.0 00:02:16.363 238778122a doc: update release notes for 23.11 00:02:16.363 46aa6b3cfc doc: fix description of RSS features 00:02:16.363 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:16.363 7e421ae345 devtools: support skipping forbid rule check 00:02:16.363 17:02:14 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 
23.11.0 21.11.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:16.364 patching file config/rte_config.h 00:02:16.364 Hunk #1 succeeded at 60 (offset 1 line). 
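The xtrace above is SPDK's cmp_versions helper from scripts/common.sh splitting the two version strings on '.', '-' and ':' and comparing them field by field; for 23.11.0 versus 21.11.0 the first fields already differ (23 > 21), so the '<' test returns 1 (false) and the build moves on to patching config/rte_config.h. A condensed, hypothetical version_lt sketch of that logic follows, handling only the plain X.Y.Z strings that appear in this log; it is meant to make the trace readable, not to reproduce the helper verbatim.

# Simplified stand-in for the cmp_versions/lt helpers traced above.
version_lt() {                       # succeeds (returns 0) when $1 < $2
    local IFS=.-: i                  # split fields the same way the trace does
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for i in 0 1 2; do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1                         # equal, so not strictly less-than
}

version_lt 23.11.0 21.11.0 && echo 'DPDK older than 21.11.0' || echo 'DPDK not older than 21.11.0'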
00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:16.364 patching file lib/pcapng/rte_pcapng.c 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.364 17:02:14 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:16.364 17:02:14 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:21.655 The Meson build system 00:02:21.655 Version: 1.5.0 00:02:21.655 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:21.655 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:21.655 Build type: native build 00:02:21.655 Program cat found: YES (/usr/bin/cat) 00:02:21.655 Project name: DPDK 00:02:21.655 Project version: 23.11.0 00:02:21.655 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.655 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:21.655 Host machine cpu family: x86_64 00:02:21.655 Host machine cpu: x86_64 00:02:21.655 Message: ## Building in Developer Mode ## 00:02:21.655 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.655 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:21.655 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.655 Program python3 found: YES (/usr/bin/python3) 00:02:21.655 Program cat found: YES (/usr/bin/cat) 00:02:21.655 
config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:21.655 Compiler for C supports arguments -march=native: YES 00:02:21.655 Checking for size of "void *" : 8 00:02:21.655 Checking for size of "void *" : 8 (cached) 00:02:21.655 Library m found: YES 00:02:21.655 Library numa found: YES 00:02:21.655 Has header "numaif.h" : YES 00:02:21.655 Library fdt found: NO 00:02:21.655 Library execinfo found: NO 00:02:21.655 Has header "execinfo.h" : YES 00:02:21.655 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.655 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.655 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.655 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.655 Run-time dependency openssl found: YES 3.1.1 00:02:21.655 Run-time dependency libpcap found: YES 1.10.4 00:02:21.655 Has header "pcap.h" with dependency libpcap: YES 00:02:21.655 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.655 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.655 Compiler for C supports arguments -Wformat: YES 00:02:21.655 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.655 Compiler for C supports arguments -Wformat-security: NO 00:02:21.655 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.655 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.655 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.655 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.655 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.655 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.655 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.655 Compiler for C supports arguments -Wundef: YES 00:02:21.655 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.655 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.655 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.655 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.655 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.655 Program objdump found: YES (/usr/bin/objdump) 00:02:21.655 Compiler for C supports arguments -mavx512f: YES 00:02:21.655 Checking if "AVX512 checking" compiles: YES 00:02:21.655 Fetching value of define "__SSE4_2__" : 1 00:02:21.655 Fetching value of define "__AES__" : 1 00:02:21.655 Fetching value of define "__AVX__" : 1 00:02:21.655 Fetching value of define "__AVX2__" : 1 00:02:21.655 Fetching value of define "__AVX512BW__" : 1 00:02:21.655 Fetching value of define "__AVX512CD__" : 1 00:02:21.655 Fetching value of define "__AVX512DQ__" : 1 00:02:21.655 Fetching value of define "__AVX512F__" : 1 00:02:21.655 Fetching value of define "__AVX512VL__" : 1 00:02:21.655 Fetching value of define "__PCLMUL__" : 1 00:02:21.655 Fetching value of define "__RDRND__" : 1 00:02:21.655 Fetching value of define "__RDSEED__" : 1 00:02:21.655 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:21.655 Fetching value of define "__znver1__" : (undefined) 00:02:21.655 Fetching value of define "__znver2__" : (undefined) 00:02:21.655 Fetching value of define "__znver3__" : (undefined) 00:02:21.655 Fetching value of define "__znver4__" : (undefined) 00:02:21.655 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.655 Message: lib/log: Defining dependency "log" 00:02:21.655 Message: 
lib/kvargs: Defining dependency "kvargs" 00:02:21.655 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.655 Checking for function "getentropy" : NO 00:02:21.655 Message: lib/eal: Defining dependency "eal" 00:02:21.655 Message: lib/ring: Defining dependency "ring" 00:02:21.655 Message: lib/rcu: Defining dependency "rcu" 00:02:21.655 Message: lib/mempool: Defining dependency "mempool" 00:02:21.655 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.655 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.655 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:21.655 Compiler for C supports arguments -mpclmul: YES 00:02:21.655 Compiler for C supports arguments -maes: YES 00:02:21.655 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.655 Compiler for C supports arguments -mavx512bw: YES 00:02:21.655 Compiler for C supports arguments -mavx512dq: YES 00:02:21.655 Compiler for C supports arguments -mavx512vl: YES 00:02:21.655 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.655 Compiler for C supports arguments -mavx2: YES 00:02:21.655 Compiler for C supports arguments -mavx: YES 00:02:21.655 Message: lib/net: Defining dependency "net" 00:02:21.655 Message: lib/meter: Defining dependency "meter" 00:02:21.655 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.655 Message: lib/pci: Defining dependency "pci" 00:02:21.655 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.655 Message: lib/metrics: Defining dependency "metrics" 00:02:21.655 Message: lib/hash: Defining dependency "hash" 00:02:21.655 Message: lib/timer: Defining dependency "timer" 00:02:21.655 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.655 Message: lib/acl: Defining dependency "acl" 00:02:21.655 Message: lib/bbdev: Defining dependency "bbdev" 00:02:21.655 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:21.655 Run-time dependency libelf found: YES 0.191 00:02:21.655 Message: lib/bpf: Defining dependency "bpf" 00:02:21.655 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:21.655 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.655 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.655 Message: lib/distributor: Defining dependency "distributor" 00:02:21.655 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.655 Message: lib/efd: Defining dependency "efd" 00:02:21.655 Message: lib/eventdev: Defining dependency "eventdev" 00:02:21.655 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:21.655 Message: lib/gpudev: Defining dependency "gpudev" 00:02:21.655 Message: lib/gro: Defining dependency "gro" 00:02:21.655 Message: lib/gso: Defining dependency "gso" 00:02:21.655 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:21.655 Message: lib/jobstats: Defining dependency "jobstats" 00:02:21.655 Message: lib/latencystats: Defining dependency "latencystats" 00:02:21.655 Message: lib/lpm: Defining dependency "lpm" 00:02:21.655 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.655 Fetching value of 
define "__AVX512DQ__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512IFMA__" : 1 00:02:21.655 Message: lib/member: Defining dependency "member" 00:02:21.655 Message: lib/pcapng: Defining dependency "pcapng" 00:02:21.655 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.655 Message: lib/power: Defining dependency "power" 00:02:21.655 Message: lib/rawdev: Defining dependency "rawdev" 00:02:21.655 Message: lib/regexdev: Defining dependency "regexdev" 00:02:21.655 Message: lib/mldev: Defining dependency "mldev" 00:02:21.655 Message: lib/rib: Defining dependency "rib" 00:02:21.655 Message: lib/reorder: Defining dependency "reorder" 00:02:21.655 Message: lib/sched: Defining dependency "sched" 00:02:21.655 Message: lib/security: Defining dependency "security" 00:02:21.655 Message: lib/stack: Defining dependency "stack" 00:02:21.655 Has header "linux/userfaultfd.h" : YES 00:02:21.655 Has header "linux/vduse.h" : YES 00:02:21.655 Message: lib/vhost: Defining dependency "vhost" 00:02:21.655 Message: lib/ipsec: Defining dependency "ipsec" 00:02:21.655 Message: lib/pdcp: Defining dependency "pdcp" 00:02:21.655 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.655 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.655 Message: lib/fib: Defining dependency "fib" 00:02:21.655 Message: lib/port: Defining dependency "port" 00:02:21.655 Message: lib/pdump: Defining dependency "pdump" 00:02:21.655 Message: lib/table: Defining dependency "table" 00:02:21.655 Message: lib/pipeline: Defining dependency "pipeline" 00:02:21.655 Message: lib/graph: Defining dependency "graph" 00:02:21.655 Message: lib/node: Defining dependency "node" 00:02:21.655 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.655 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.655 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.574 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.574 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:23.574 Compiler for C supports arguments -Wno-unused-value: YES 00:02:23.574 Compiler for C supports arguments -Wno-format: YES 00:02:23.574 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.574 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.574 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.574 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.574 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:23.574 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.574 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.575 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.575 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:23.575 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:23.575 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:23.575 Has header "sys/epoll.h" : YES 00:02:23.575 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.575 Configuring doxy-api-html.conf using configuration 00:02:23.575 Configuring doxy-api-man.conf using configuration 00:02:23.575 Program mandb found: YES (/usr/bin/mandb) 00:02:23.575 Program sphinx-build found: NO 00:02:23.575 Configuring rte_build_config.h using configuration 00:02:23.575 Message: 00:02:23.575 ================= 00:02:23.575 Applications Enabled 00:02:23.575 
================= 00:02:23.575 00:02:23.575 apps: 00:02:23.575 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:23.575 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:23.575 test-pmd, test-regex, test-sad, test-security-perf, 00:02:23.575 00:02:23.575 Message: 00:02:23.575 ================= 00:02:23.575 Libraries Enabled 00:02:23.575 ================= 00:02:23.575 00:02:23.575 libs: 00:02:23.575 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.575 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:23.575 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:23.575 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:23.575 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:23.575 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:23.575 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:23.575 00:02:23.575 00:02:23.575 Message: 00:02:23.575 =============== 00:02:23.575 Drivers Enabled 00:02:23.575 =============== 00:02:23.575 00:02:23.575 common: 00:02:23.575 00:02:23.575 bus: 00:02:23.575 pci, vdev, 00:02:23.575 mempool: 00:02:23.575 ring, 00:02:23.575 dma: 00:02:23.575 00:02:23.575 net: 00:02:23.575 i40e, 00:02:23.575 raw: 00:02:23.575 00:02:23.575 crypto: 00:02:23.575 00:02:23.575 compress: 00:02:23.575 00:02:23.575 regex: 00:02:23.575 00:02:23.575 ml: 00:02:23.575 00:02:23.575 vdpa: 00:02:23.575 00:02:23.575 event: 00:02:23.575 00:02:23.575 baseband: 00:02:23.575 00:02:23.575 gpu: 00:02:23.575 00:02:23.575 00:02:23.575 Message: 00:02:23.575 ================= 00:02:23.575 Content Skipped 00:02:23.575 ================= 00:02:23.575 00:02:23.575 apps: 00:02:23.575 00:02:23.575 libs: 00:02:23.575 00:02:23.575 drivers: 00:02:23.575 common/cpt: not in enabled drivers build config 00:02:23.575 common/dpaax: not in enabled drivers build config 00:02:23.575 common/iavf: not in enabled drivers build config 00:02:23.575 common/idpf: not in enabled drivers build config 00:02:23.575 common/mvep: not in enabled drivers build config 00:02:23.575 common/octeontx: not in enabled drivers build config 00:02:23.575 bus/auxiliary: not in enabled drivers build config 00:02:23.575 bus/cdx: not in enabled drivers build config 00:02:23.575 bus/dpaa: not in enabled drivers build config 00:02:23.575 bus/fslmc: not in enabled drivers build config 00:02:23.575 bus/ifpga: not in enabled drivers build config 00:02:23.575 bus/platform: not in enabled drivers build config 00:02:23.575 bus/vmbus: not in enabled drivers build config 00:02:23.575 common/cnxk: not in enabled drivers build config 00:02:23.575 common/mlx5: not in enabled drivers build config 00:02:23.575 common/nfp: not in enabled drivers build config 00:02:23.575 common/qat: not in enabled drivers build config 00:02:23.575 common/sfc_efx: not in enabled drivers build config 00:02:23.575 mempool/bucket: not in enabled drivers build config 00:02:23.575 mempool/cnxk: not in enabled drivers build config 00:02:23.575 mempool/dpaa: not in enabled drivers build config 00:02:23.575 mempool/dpaa2: not in enabled drivers build config 00:02:23.575 mempool/octeontx: not in enabled drivers build config 00:02:23.575 mempool/stack: not in enabled drivers build config 00:02:23.575 dma/cnxk: not in enabled drivers build config 00:02:23.575 dma/dpaa: not in enabled drivers build config 00:02:23.575 dma/dpaa2: not in enabled drivers build 
config 00:02:23.575 dma/hisilicon: not in enabled drivers build config 00:02:23.575 dma/idxd: not in enabled drivers build config 00:02:23.575 dma/ioat: not in enabled drivers build config 00:02:23.575 dma/skeleton: not in enabled drivers build config 00:02:23.575 net/af_packet: not in enabled drivers build config 00:02:23.575 net/af_xdp: not in enabled drivers build config 00:02:23.575 net/ark: not in enabled drivers build config 00:02:23.575 net/atlantic: not in enabled drivers build config 00:02:23.575 net/avp: not in enabled drivers build config 00:02:23.575 net/axgbe: not in enabled drivers build config 00:02:23.575 net/bnx2x: not in enabled drivers build config 00:02:23.575 net/bnxt: not in enabled drivers build config 00:02:23.575 net/bonding: not in enabled drivers build config 00:02:23.575 net/cnxk: not in enabled drivers build config 00:02:23.575 net/cpfl: not in enabled drivers build config 00:02:23.575 net/cxgbe: not in enabled drivers build config 00:02:23.575 net/dpaa: not in enabled drivers build config 00:02:23.575 net/dpaa2: not in enabled drivers build config 00:02:23.575 net/e1000: not in enabled drivers build config 00:02:23.575 net/ena: not in enabled drivers build config 00:02:23.575 net/enetc: not in enabled drivers build config 00:02:23.575 net/enetfec: not in enabled drivers build config 00:02:23.575 net/enic: not in enabled drivers build config 00:02:23.575 net/failsafe: not in enabled drivers build config 00:02:23.575 net/fm10k: not in enabled drivers build config 00:02:23.575 net/gve: not in enabled drivers build config 00:02:23.575 net/hinic: not in enabled drivers build config 00:02:23.575 net/hns3: not in enabled drivers build config 00:02:23.575 net/iavf: not in enabled drivers build config 00:02:23.575 net/ice: not in enabled drivers build config 00:02:23.575 net/idpf: not in enabled drivers build config 00:02:23.575 net/igc: not in enabled drivers build config 00:02:23.575 net/ionic: not in enabled drivers build config 00:02:23.575 net/ipn3ke: not in enabled drivers build config 00:02:23.575 net/ixgbe: not in enabled drivers build config 00:02:23.575 net/mana: not in enabled drivers build config 00:02:23.575 net/memif: not in enabled drivers build config 00:02:23.575 net/mlx4: not in enabled drivers build config 00:02:23.575 net/mlx5: not in enabled drivers build config 00:02:23.575 net/mvneta: not in enabled drivers build config 00:02:23.575 net/mvpp2: not in enabled drivers build config 00:02:23.575 net/netvsc: not in enabled drivers build config 00:02:23.575 net/nfb: not in enabled drivers build config 00:02:23.575 net/nfp: not in enabled drivers build config 00:02:23.575 net/ngbe: not in enabled drivers build config 00:02:23.575 net/null: not in enabled drivers build config 00:02:23.575 net/octeontx: not in enabled drivers build config 00:02:23.575 net/octeon_ep: not in enabled drivers build config 00:02:23.575 net/pcap: not in enabled drivers build config 00:02:23.575 net/pfe: not in enabled drivers build config 00:02:23.575 net/qede: not in enabled drivers build config 00:02:23.575 net/ring: not in enabled drivers build config 00:02:23.575 net/sfc: not in enabled drivers build config 00:02:23.575 net/softnic: not in enabled drivers build config 00:02:23.575 net/tap: not in enabled drivers build config 00:02:23.575 net/thunderx: not in enabled drivers build config 00:02:23.575 net/txgbe: not in enabled drivers build config 00:02:23.575 net/vdev_netvsc: not in enabled drivers build config 00:02:23.575 net/vhost: not in enabled drivers build config 
00:02:23.575 net/virtio: not in enabled drivers build config 00:02:23.575 net/vmxnet3: not in enabled drivers build config 00:02:23.575 raw/cnxk_bphy: not in enabled drivers build config 00:02:23.575 raw/cnxk_gpio: not in enabled drivers build config 00:02:23.575 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:23.575 raw/ifpga: not in enabled drivers build config 00:02:23.575 raw/ntb: not in enabled drivers build config 00:02:23.575 raw/skeleton: not in enabled drivers build config 00:02:23.575 crypto/armv8: not in enabled drivers build config 00:02:23.575 crypto/bcmfs: not in enabled drivers build config 00:02:23.575 crypto/caam_jr: not in enabled drivers build config 00:02:23.575 crypto/ccp: not in enabled drivers build config 00:02:23.575 crypto/cnxk: not in enabled drivers build config 00:02:23.575 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.575 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.575 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.575 crypto/mlx5: not in enabled drivers build config 00:02:23.575 crypto/mvsam: not in enabled drivers build config 00:02:23.575 crypto/nitrox: not in enabled drivers build config 00:02:23.575 crypto/null: not in enabled drivers build config 00:02:23.575 crypto/octeontx: not in enabled drivers build config 00:02:23.575 crypto/openssl: not in enabled drivers build config 00:02:23.575 crypto/scheduler: not in enabled drivers build config 00:02:23.575 crypto/uadk: not in enabled drivers build config 00:02:23.575 crypto/virtio: not in enabled drivers build config 00:02:23.575 compress/isal: not in enabled drivers build config 00:02:23.575 compress/mlx5: not in enabled drivers build config 00:02:23.575 compress/octeontx: not in enabled drivers build config 00:02:23.575 compress/zlib: not in enabled drivers build config 00:02:23.575 regex/mlx5: not in enabled drivers build config 00:02:23.575 regex/cn9k: not in enabled drivers build config 00:02:23.575 ml/cnxk: not in enabled drivers build config 00:02:23.575 vdpa/ifc: not in enabled drivers build config 00:02:23.575 vdpa/mlx5: not in enabled drivers build config 00:02:23.575 vdpa/nfp: not in enabled drivers build config 00:02:23.575 vdpa/sfc: not in enabled drivers build config 00:02:23.575 event/cnxk: not in enabled drivers build config 00:02:23.575 event/dlb2: not in enabled drivers build config 00:02:23.575 event/dpaa: not in enabled drivers build config 00:02:23.576 event/dpaa2: not in enabled drivers build config 00:02:23.576 event/dsw: not in enabled drivers build config 00:02:23.576 event/opdl: not in enabled drivers build config 00:02:23.576 event/skeleton: not in enabled drivers build config 00:02:23.576 event/sw: not in enabled drivers build config 00:02:23.576 event/octeontx: not in enabled drivers build config 00:02:23.576 baseband/acc: not in enabled drivers build config 00:02:23.576 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:23.576 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:23.576 baseband/la12xx: not in enabled drivers build config 00:02:23.576 baseband/null: not in enabled drivers build config 00:02:23.576 baseband/turbo_sw: not in enabled drivers build config 00:02:23.576 gpu/cuda: not in enabled drivers build config 00:02:23.576 00:02:23.576 00:02:23.576 Build targets in project: 215 00:02:23.576 00:02:23.576 DPDK 23.11.0 00:02:23.576 00:02:23.576 User defined options 00:02:23.576 libdir : lib 00:02:23.576 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:23.576 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:23.576 c_link_args : 00:02:23.576 enable_docs : false 00:02:23.576 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:23.576 enable_kmods : false 00:02:23.576 machine : native 00:02:23.576 tests : false 00:02:23.576 00:02:23.576 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.576 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:23.576 17:02:21 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:02:23.576 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:23.576 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.576 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.576 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.576 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.576 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.576 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.576 [7/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.576 [8/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.576 [9/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.576 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.576 [11/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.576 [12/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.576 [13/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.576 [14/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.576 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.836 [16/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.836 [17/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.836 [18/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.836 [19/705] Linking static target lib/librte_log.a 00:02:23.836 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.836 [21/705] Linking static target lib/librte_pci.a 00:02:23.836 [22/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.836 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.836 [24/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.836 [25/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.836 [26/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.836 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.836 [28/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.836 [29/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.836 [30/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.836 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.836 [32/705] Linking static target lib/librte_kvargs.a 00:02:23.836 [33/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.100 [34/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.100 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.100 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.100 [37/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.100 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.100 [39/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.100 [40/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.100 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.100 [42/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.100 [43/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.100 [44/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.100 [45/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.100 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.100 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.100 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.100 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.100 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.100 [51/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.100 [52/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.100 [53/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.100 [54/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.100 [55/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.100 [56/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.364 [57/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.364 [58/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.364 [59/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.364 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.364 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.364 [62/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.364 [63/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.364 [64/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.364 [65/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.364 [66/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.364 [67/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.364 [68/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:24.364 [69/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:24.364 [70/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.364 [71/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.364 [72/705] Linking static target lib/librte_cfgfile.a 
00:02:24.364 [73/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.364 [74/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.364 [75/705] Linking static target lib/librte_bitratestats.a 00:02:24.364 [76/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:24.364 [77/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.364 [78/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.364 [79/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.364 [80/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.364 [81/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.364 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.364 [83/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.364 [84/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.364 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.364 [86/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.364 [87/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.364 [88/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.364 [89/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.364 [90/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.364 [91/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.622 [92/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.622 [93/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.622 [94/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.622 [95/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.622 [96/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.622 [97/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:24.622 [98/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.622 [99/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.622 [100/705] Linking static target lib/librte_meter.a 00:02:24.623 [101/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:24.623 [102/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.623 [103/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.623 [104/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.623 [105/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.623 [106/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:24.623 [107/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.623 [108/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.623 [109/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.623 [110/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.623 [111/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.623 [112/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:24.623 [113/705] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.623 [114/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.623 [115/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.623 [116/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.623 [117/705] Linking target lib/librte_log.so.24.0 00:02:24.623 [118/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.623 [119/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.623 [120/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.623 [121/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:24.623 [122/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.623 [123/705] Linking static target lib/librte_ring.a 00:02:24.623 [124/705] Linking static target lib/librte_cmdline.a 00:02:24.623 [125/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.623 [126/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:24.623 [127/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:24.623 [128/705] Linking static target lib/librte_jobstats.a 00:02:24.623 [129/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.623 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.623 [131/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.623 [132/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.623 [133/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:24.623 [134/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.623 [135/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:24.623 [136/705] Linking static target lib/librte_timer.a 00:02:24.623 [137/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.623 [138/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.623 [139/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.623 [140/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.623 [141/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.623 [142/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.623 [143/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:24.623 [144/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:24.623 [145/705] Linking static target lib/librte_metrics.a 00:02:24.882 [146/705] Linking static target lib/librte_net.a 00:02:24.882 [147/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:24.882 [148/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.882 [149/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:24.882 [150/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.882 [151/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.882 [152/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.882 [153/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.882 [154/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.882 [155/705] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:24.882 [156/705] Linking static target lib/librte_bbdev.a 00:02:24.882 [157/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.882 [158/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:24.882 [159/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.882 [160/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.882 [161/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:24.882 [162/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:24.882 [163/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:24.882 [164/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.882 [165/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.882 [166/705] Linking static target lib/librte_dmadev.a 00:02:24.882 [167/705] Linking static target lib/librte_gso.a 00:02:24.882 [168/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.882 [169/705] Linking target lib/librte_kvargs.so.24.0 00:02:24.882 [170/705] Linking static target lib/librte_compressdev.a 00:02:24.882 [171/705] Linking static target lib/librte_distributor.a 00:02:24.882 [172/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.882 [173/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:24.882 [174/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.882 [175/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:24.882 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:24.882 [177/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:24.882 [178/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:24.882 [179/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.882 [180/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.882 [181/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:24.882 [182/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:24.882 [183/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.882 [184/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:25.141 [185/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.141 [186/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:25.141 [187/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:25.141 [188/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.141 [189/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:25.141 [190/705] Linking static target lib/librte_latencystats.a 00:02:25.141 [191/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:25.141 [192/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:25.141 [193/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.141 [194/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:25.141 [195/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:25.141 [196/705] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.141 [197/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:25.141 [198/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.141 [199/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.141 [200/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.141 [201/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.141 [202/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.141 [203/705] Linking static target lib/librte_dispatcher.a 00:02:25.141 [204/705] Linking static target lib/librte_telemetry.a 00:02:25.141 [205/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:25.141 [206/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:25.141 [207/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:25.141 [208/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.141 [209/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.141 [210/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.141 [211/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.141 [212/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.141 [213/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:25.141 [214/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.141 [215/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.141 [216/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.141 [217/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:25.141 [218/705] Linking static target lib/librte_gro.a 00:02:25.141 [219/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.141 [220/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:25.141 [221/705] Linking static target lib/librte_rcu.a 00:02:25.141 [222/705] Linking static target lib/librte_stack.a 00:02:25.141 [223/705] Linking static target lib/librte_gpudev.a 00:02:25.141 [224/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.141 [225/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.141 [226/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:25.141 [227/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.141 [228/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.404 [229/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.404 [230/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.404 [231/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:25.404 [232/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.404 [233/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:25.404 [234/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.404 [235/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.404 [236/705] Linking static target lib/librte_power.a 00:02:25.404 [237/705] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.404 [238/705] Linking static target lib/librte_mempool.a 00:02:25.404 [239/705] Linking static target lib/librte_regexdev.a 00:02:25.404 [240/705] Linking static target lib/librte_ip_frag.a 00:02:25.404 [241/705] Linking static target lib/librte_rawdev.a 00:02:25.404 [242/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.404 [243/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:25.404 [244/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:25.404 [245/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.404 [246/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.404 [247/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.404 [248/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.404 [249/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.404 [250/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:25.404 [251/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.404 [252/705] Linking static target lib/librte_pcapng.a 00:02:25.404 [253/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.404 [254/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.404 [255/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.404 [256/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.404 [257/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:25.404 [258/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:25.404 [259/705] Linking static target lib/librte_bpf.a 00:02:25.404 [260/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.404 [261/705] Linking static target lib/librte_mbuf.a 00:02:25.404 [262/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.404 [263/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.404 [264/705] Linking static target lib/librte_reorder.a 00:02:25.404 [265/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:25.404 [266/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.404 [267/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.404 [268/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:25.404 [269/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.404 [270/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:25.404 [271/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:25.404 [272/705] Linking static target lib/librte_mldev.a 00:02:25.404 [273/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:25.404 [274/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.665 [275/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.665 [276/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:25.665 [277/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.665 [278/705] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:25.665 [279/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:25.665 [280/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:25.665 [281/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:25.665 [282/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.665 [283/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:25.665 [284/705] Linking static target lib/librte_eal.a 00:02:25.665 [285/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.665 [286/705] Linking static target lib/librte_lpm.a 00:02:25.665 [287/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:25.665 [288/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.665 [289/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:25.665 [290/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:25.665 [291/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:25.665 [292/705] Linking static target lib/librte_security.a 00:02:25.665 [293/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:25.665 [294/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:25.665 [295/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.665 [296/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:25.665 [297/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:25.665 [298/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:25.665 [299/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:25.665 [300/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:25.665 [301/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:25.665 [302/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:25.665 [303/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:25.665 [304/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:25.927 [305/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:25.927 [306/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:25.927 [307/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.927 [308/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:25.927 [309/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:25.927 [310/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.927 [311/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:25.927 [312/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:25.927 [313/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:25.927 [314/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:25.927 [315/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.927 [316/705] Linking static target lib/librte_rib.a 00:02:25.927 [317/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.928 [318/705] Linking target lib/librte_telemetry.so.24.0 00:02:25.928 [319/705] Generating lib/bpf.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:25.928 [320/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:25.928 [321/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.928 [322/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:25.928 [323/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:25.928 [324/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:25.928 [325/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.928 [326/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:25.928 [327/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.928 [328/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:25.928 [329/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.928 [330/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:25.928 [331/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:25.928 [332/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:25.928 [333/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:25.928 [334/705] Linking static target lib/librte_efd.a 00:02:25.928 [335/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:25.928 [336/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:25.928 [337/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.928 [338/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:25.928 [339/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:25.928 [340/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:25.928 [341/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:26.188 [342/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:26.188 [343/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.188 [344/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:26.188 [345/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.188 [346/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.188 [347/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:26.188 [348/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:26.189 [349/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:26.189 [350/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.189 [351/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:26.189 [352/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:26.189 [353/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.189 [354/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.189 [355/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:26.189 [356/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.189 [357/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 
00:02:26.189 [358/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:26.189 [359/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.189 [360/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.189 [361/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.189 [362/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.189 [363/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:26.189 [364/705] Linking static target lib/librte_graph.a 00:02:26.189 [365/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.189 [366/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:26.189 [367/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:26.189 [368/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:26.189 [369/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.189 [370/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:26.189 [371/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:26.189 [372/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:26.189 [373/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.189 [374/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:26.451 [375/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:26.451 [376/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.451 [377/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:26.451 [378/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:26.451 [379/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:26.451 [380/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:26.451 [381/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:26.451 [382/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.451 [383/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.451 [384/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:26.451 [385/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.451 [386/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:26.451 [387/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:26.451 [388/705] Linking static target lib/librte_fib.a 00:02:26.451 [389/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:26.451 [390/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.451 [391/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:26.451 [392/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:26.451 [393/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:26.451 [394/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:26.451 [395/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:26.451 [396/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.451 [397/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.451 [398/705] 
Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.451 [399/705] Linking static target lib/librte_pdump.a 00:02:26.451 [400/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.451 [401/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.452 [402/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:26.452 [403/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:26.452 [404/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.452 [405/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:26.452 [406/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.452 [407/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.452 [408/705] Linking static target drivers/librte_bus_vdev.a 00:02:26.711 [409/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.711 [410/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:26.711 [411/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:26.711 [412/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:26.711 [413/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.711 [414/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:26.711 [415/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:26.711 [416/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.711 [417/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.711 [418/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:26.711 [419/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.711 [420/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.711 [421/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:26.711 [422/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:26.711 [423/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:26.711 [424/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:26.711 [425/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:26.711 [426/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:26.711 [427/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.711 [428/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:26.711 [429/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:26.711 [430/705] Linking static target lib/librte_sched.a 00:02:26.711 [431/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:26.711 [432/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:26.711 [433/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:26.711 [434/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:26.711 [435/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:26.711 [436/705] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:26.711 [437/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:26.711 [438/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:26.711 [439/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:26.711 [440/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.971 [441/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:26.971 [442/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:26.971 [443/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.971 [444/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:26.971 [445/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.971 [446/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.971 [447/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:26.971 [448/705] Linking static target lib/librte_table.a 00:02:26.971 [449/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:26.971 [450/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.971 [451/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:26.971 [452/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:26.971 [453/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.971 [454/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:26.971 [455/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.971 [456/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.971 [457/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.971 [458/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:26.971 [459/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.971 [460/705] Linking static target drivers/librte_bus_pci.a 00:02:26.971 [461/705] Linking static target drivers/librte_mempool_ring.a 00:02:26.971 [462/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:26.971 [463/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:26.971 [464/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:26.971 [465/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:26.971 [466/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:26.971 [467/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:26.971 [468/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:26.971 [469/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:26.971 [470/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:26.971 [471/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:26.971 [472/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:26.971 [473/705] 
Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:26.971 [474/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:26.971 [475/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.971 [476/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:26.971 [477/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:26.971 [478/705] Linking static target lib/librte_cryptodev.a 00:02:26.971 [479/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:26.971 [480/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:26.971 [481/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:26.971 [482/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:26.971 [483/705] Linking static target lib/librte_member.a 00:02:26.971 [484/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:26.971 [485/705] Linking static target lib/librte_node.a 00:02:26.971 [486/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:27.230 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:27.230 [488/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:27.230 [489/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:27.230 [490/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:27.230 [491/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:27.230 [492/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:27.230 [493/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:27.230 [494/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.230 [495/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:27.230 [496/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:27.230 [497/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:27.230 [498/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:27.230 [499/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:27.230 [500/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:27.230 [501/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:27.230 [502/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:27.230 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:27.230 [504/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:27.230 [505/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:27.230 [506/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:27.230 [507/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:27.230 [508/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:27.230 [509/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.230 [510/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:27.230 [511/705] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:27.230 [512/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:27.231 [513/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:27.231 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:27.231 [515/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:27.231 [516/705] Linking static target lib/librte_port.a 00:02:27.231 [517/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:27.231 [518/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:27.231 [519/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:27.231 [520/705] Linking static target lib/acl/libavx2_tmp.a 00:02:27.231 [521/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:27.231 [522/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:27.231 [523/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.231 [524/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:27.231 [525/705] Linking static target lib/librte_pdcp.a 00:02:27.231 [526/705] Linking static target lib/librte_ipsec.a 00:02:27.231 [527/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:27.231 [528/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:27.231 [529/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.231 [530/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:27.231 [531/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:27.489 [532/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:27.489 [533/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:27.489 [534/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:27.489 [535/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:27.489 [536/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:27.489 [537/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:27.489 [538/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.489 [539/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.489 [540/705] Linking static target lib/librte_hash.a 00:02:27.489 [541/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:27.489 [542/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.489 [543/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:27.489 [544/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:27.489 [545/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:27.489 [546/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:27.489 [547/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:27.489 [548/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:27.489 [549/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:27.748 [550/705] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.748 [551/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:27.748 [552/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:27.748 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:27.748 [554/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.748 [555/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:27.748 [556/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:27.748 [557/705] Linking static target lib/librte_eventdev.a 00:02:27.748 [558/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:27.748 [559/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:27.748 [560/705] Linking static target lib/librte_acl.a 00:02:27.748 [561/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.748 [562/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:27.748 [563/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:27.748 [564/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:28.009 [565/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:28.009 [566/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.009 [567/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:28.009 [568/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.269 [569/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.269 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:28.269 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:28.269 [572/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.531 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.531 [574/705] Linking static target lib/librte_ethdev.a 00:02:28.531 [575/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:28.531 [576/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:28.792 [577/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:29.052 [578/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.313 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:29.573 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:29.573 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:29.573 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.834 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:29.834 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:29.834 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:29.834 [586/705] Linking static target drivers/librte_net_i40e.a 00:02:30.408 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:30.978 [588/705] 
Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.978 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:31.239 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.448 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:35.448 [592/705] Linking static target lib/librte_pipeline.a 00:02:36.391 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.391 [594/705] Linking static target lib/librte_vhost.a 00:02:36.652 [595/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.652 [596/705] Linking target lib/librte_eal.so.24.0 00:02:36.911 [597/705] Linking target app/dpdk-pdump 00:02:36.911 [598/705] Linking target app/dpdk-dumpcap 00:02:36.911 [599/705] Linking target app/dpdk-test-pipeline 00:02:36.911 [600/705] Linking target app/dpdk-test-regex 00:02:36.911 [601/705] Linking target app/dpdk-test-security-perf 00:02:36.911 [602/705] Linking target app/dpdk-test-compress-perf 00:02:36.911 [603/705] Linking target app/dpdk-test-acl 00:02:36.911 [604/705] Linking target app/dpdk-test-fib 00:02:36.911 [605/705] Linking target app/dpdk-test-gpudev 00:02:36.911 [606/705] Linking target app/dpdk-test-sad 00:02:36.911 [607/705] Linking target app/dpdk-test-eventdev 00:02:36.911 [608/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:36.911 [609/705] Linking target app/dpdk-test-flow-perf 00:02:36.911 [610/705] Linking target app/dpdk-proc-info 00:02:36.911 [611/705] Linking target app/dpdk-graph 00:02:36.911 [612/705] Linking target lib/librte_cfgfile.so.24.0 00:02:36.911 [613/705] Linking target lib/librte_dmadev.so.24.0 00:02:36.911 [614/705] Linking target app/dpdk-test-crypto-perf 00:02:36.911 [615/705] Linking target app/dpdk-test-cmdline 00:02:36.911 [616/705] Linking target lib/librte_ring.so.24.0 00:02:36.911 [617/705] Linking target lib/librte_acl.so.24.0 00:02:36.911 [618/705] Linking target app/dpdk-test-dma-perf 00:02:36.911 [619/705] Linking target lib/librte_meter.so.24.0 00:02:36.911 [620/705] Linking target lib/librte_timer.so.24.0 00:02:36.911 [621/705] Linking target lib/librte_pci.so.24.0 00:02:36.911 [622/705] Linking target lib/librte_stack.so.24.0 00:02:36.911 [623/705] Linking target app/dpdk-test-bbdev 00:02:36.911 [624/705] Linking target app/dpdk-test-mldev 00:02:36.911 [625/705] Linking target lib/librte_jobstats.so.24.0 00:02:36.911 [626/705] Linking target lib/librte_rawdev.so.24.0 00:02:36.911 [627/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:36.911 [628/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.911 [629/705] Linking target app/dpdk-testpmd 00:02:37.171 [630/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:37.171 [631/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:37.171 [632/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:37.171 [633/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:37.171 [634/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:37.171 [635/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:37.171 [636/705] Generating symbol file 
lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:37.171 [637/705] Linking target lib/librte_mempool.so.24.0 00:02:37.171 [638/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:37.171 [639/705] Linking target lib/librte_rcu.so.24.0 00:02:37.171 [640/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:37.171 [641/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:37.171 [642/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:37.171 [643/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:37.171 [644/705] Linking target lib/librte_rib.so.24.0 00:02:37.171 [645/705] Linking target lib/librte_mbuf.so.24.0 00:02:37.431 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:37.431 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:37.431 [648/705] Linking target lib/librte_cryptodev.so.24.0 00:02:37.431 [649/705] Linking target lib/librte_bbdev.so.24.0 00:02:37.431 [650/705] Linking target lib/librte_net.so.24.0 00:02:37.431 [651/705] Linking target lib/librte_distributor.so.24.0 00:02:37.431 [652/705] Linking target lib/librte_compressdev.so.24.0 00:02:37.431 [653/705] Linking target lib/librte_regexdev.so.24.0 00:02:37.431 [654/705] Linking target lib/librte_gpudev.so.24.0 00:02:37.431 [655/705] Linking target lib/librte_reorder.so.24.0 00:02:37.431 [656/705] Linking target lib/librte_mldev.so.24.0 00:02:37.431 [657/705] Linking target lib/librte_sched.so.24.0 00:02:37.431 [658/705] Linking target lib/librte_fib.so.24.0 00:02:37.692 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:37.692 [660/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:37.692 [661/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:37.692 [662/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:37.692 [663/705] Linking target lib/librte_ethdev.so.24.0 00:02:37.692 [664/705] Linking target lib/librte_cmdline.so.24.0 00:02:37.692 [665/705] Linking target lib/librte_hash.so.24.0 00:02:37.692 [666/705] Linking target lib/librte_security.so.24.0 00:02:37.692 [667/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:37.692 [668/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:37.692 [669/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:37.952 [670/705] Linking target lib/librte_metrics.so.24.0 00:02:37.952 [671/705] Linking target lib/librte_gro.so.24.0 00:02:37.952 [672/705] Linking target lib/librte_pcapng.so.24.0 00:02:37.952 [673/705] Linking target lib/librte_gso.so.24.0 00:02:37.952 [674/705] Linking target lib/librte_power.so.24.0 00:02:37.952 [675/705] Linking target lib/librte_bpf.so.24.0 00:02:37.952 [676/705] Linking target lib/librte_ipsec.so.24.0 00:02:37.952 [677/705] Linking target lib/librte_lpm.so.24.0 00:02:37.952 [678/705] Linking target lib/librte_efd.so.24.0 00:02:37.952 [679/705] Linking target lib/librte_ip_frag.so.24.0 00:02:37.952 [680/705] Linking target lib/librte_member.so.24.0 00:02:37.952 [681/705] Linking target lib/librte_pdcp.so.24.0 00:02:37.952 [682/705] Linking target lib/librte_eventdev.so.24.0 00:02:37.952 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:02:37.952 
[684/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:37.952 [685/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:37.952 [686/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:37.952 [687/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:37.952 [688/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:37.952 [689/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:37.952 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:37.952 [691/705] Linking target lib/librte_graph.so.24.0 00:02:37.952 [692/705] Linking target lib/librte_pdump.so.24.0 00:02:37.952 [693/705] Linking target lib/librte_bitratestats.so.24.0 00:02:37.952 [694/705] Linking target lib/librte_latencystats.so.24.0 00:02:37.952 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:02:37.952 [696/705] Linking target lib/librte_port.so.24.0 00:02:38.213 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:38.213 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:38.213 [699/705] Linking target lib/librte_node.so.24.0 00:02:38.213 [700/705] Linking target lib/librte_table.so.24.0 00:02:38.472 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:38.472 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.472 [703/705] Linking target lib/librte_vhost.so.24.0 00:02:40.385 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.654 [705/705] Linking target lib/librte_pipeline.so.24.0 00:02:40.654 17:02:38 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:40.654 17:02:38 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:40.654 17:02:38 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:02:40.654 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.654 [0/1] Installing files. 
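(Editor's note on the trace above: the `autobuild_common.sh` lines at 17:02:38 show the build script checking the host OS with `uname -s` and, since this run is Linux rather than FreeBSD, installing the freshly built DPDK tree with ninja. A minimal shell sketch of that pattern follows; it is hypothetical and only mirrors the commands the log records, not the real autobuild_common.sh. The build-tmp path and the -j144 value are taken directly from the log.)

    # Sketch of the traced step: skip the FreeBSD-specific branch, then
    # install the meson/ninja-built DPDK tree.
    DPDK_BUILD_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp

    if [[ "$(uname -s)" != "FreeBSD" ]]; then
        # On Linux (as in this run) the install is driven by ninja;
        # the file-by-file "Installing ..." listing below is its output.
        ninja -C "$DPDK_BUILD_DIR" -j144 install
    fi
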
00:02:40.923 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.929 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.929 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.930 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.196 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.196 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.196 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.196 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.196 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.196 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.196 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.196 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.196 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.196 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.197 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
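A point worth noting about the entries above and the hash, timer, acl and driver headers that follow: DPDK's install step flattens headers from many per-library source subtrees (lib/eal, lib/ring, lib/net, lib/metrics, ...) into one staging include directory. A minimal sketch, assuming only the staging paths shown in the log, of how that flat layout can be inspected by hand:

# Staging prefix taken from the install lines above (assumption: unchanged).
STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build

# Headers staged from different source subtrees (lib/ring/, lib/net/, lib/metrics/, ...)
# all sit side by side in one directory:
ls "$STAGE/include" | grep -E '^rte_(ring|ip|metrics)' | sort | head

# Rough count of everything staged so far:
ls "$STAGE"/include/rte_*.h | wc -l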
00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.198 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.199 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.200 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.200 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:41.200 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:41.200 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:41.200 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:41.200 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:41.200 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:41.200 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:41.201 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:41.201 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:41.201 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:41.201 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:41.201 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:41.201 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:41.201 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:41.201 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:41.201 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:41.201 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:41.201 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:41.201 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:41.201 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:41.201 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:41.201 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:41.201 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:41.201 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:41.201 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:41.201 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:41.201 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:41.201 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:41.201 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:41.201 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:41.201 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:41.201 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:41.201 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:41.201 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:41.201 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:41.201 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:41.201 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:41.201 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:41.201 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:41.201 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:41.201 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:41.201 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:41.201 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:41.201 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:41.201 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:41.201 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:41.201 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:41.201 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:41.201 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:41.201 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:41.201 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:41.201 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:41.201 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:41.201 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:41.201 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:41.201 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:41.201 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:41.201 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:41.201 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:41.201 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:41.201 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:41.201 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:41.201 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:41.201 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:41.201 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:41.201 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:41.201 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:41.201 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:41.201 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:41.201 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:41.201 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:41.201 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:41.201 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:41.201 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:41.201 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:41.201 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:41.201 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:41.201 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:41.201 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:41.201 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:41.201 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:41.201 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:41.201 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:41.201 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:41.201 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:41.201 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:41.201 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:41.201 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:41.201 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:41.201 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:41.201 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:41.201 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:41.201 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:41.201 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:41.201 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:41.201 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:41.201 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:41.201 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:41.201 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:41.201 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:41.201 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:41.201 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:41.201 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:41.201 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:41.201 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:41.201 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:41.202 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:41.202 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:41.202 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:41.202 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:41.202 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:41.202 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:41.202 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:41.202 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:41.202 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:41.202 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:41.202 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:41.202 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:41.202 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:41.202 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:41.202 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:41.202 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:41.202 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:41.202 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:41.202 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:41.202 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:41.202 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:41.202 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:41.202 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:41.202 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:41.202 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:41.202 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:41.202 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:02:41.202 17:02:39 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:02:41.202 17:02:39 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:41.202 
00:02:41.202 real 0m24.967s
00:02:41.202 user 7m13.470s
00:02:41.202 sys 3m6.235s
00:02:41.202 17:02:39 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:41.202 17:02:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:41.202 ************************************
00:02:41.202 END TEST build_native_dpdk
00:02:41.202 ************************************
00:02:41.202 17:02:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:41.202 17:02:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:41.202 17:02:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:41.464 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:41.726 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.726 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.726 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:41.987 Using 'verbs' RDMA provider
00:02:58.007 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:10.252 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:10.252 Creating mk/config.mk...done.
00:03:10.252 Creating mk/cc.flags.mk...done.
00:03:10.252 Type 'make' to build.
00:03:10.252 17:03:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:03:10.252 17:03:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:10.252 17:03:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:10.252 17:03:08 -- common/autotest_common.sh@10 -- $ set +x
00:03:10.252 ************************************
00:03:10.252 START TEST make
00:03:10.252 ************************************
00:03:10.252 17:03:08 make -- common/autotest_common.sh@1125 -- $ make -j144
00:03:10.513 make[1]: Nothing to be done for 'all'.
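The configure step above points SPDK at the freshly staged DPDK via --with-dpdk=.../dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs..." line shows it resolving that install through the libdpdk.pc files installed earlier. A minimal sketch, using only paths that appear in the log, of how the staged install can be queried by hand (the exact flags SPDK's configure derives from it are not shown in this excerpt):

# Point pkg-config at the staged DPDK install (path from the log above).
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig

# libdpdk.pc and libdpdk-libs.pc were installed by the steps above.
pkg-config --modversion libdpdk   # should report the 23.11 release fetched earlier
pkg-config --cflags libdpdk       # include path for the staged headers
pkg-config --libs libdpdk         # library path and the -lrte_* link line

# The versioned symlink chain created by the "Installing symlink" steps, e.g. for EAL:
# librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0 (the real DSO)
ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*

The same chain is repeated for every librte_* library above, and symlink-drivers-solibs.sh applies it to the PMDs relocated under dpdk/pmds-24.0.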
00:03:11.899 The Meson build system 00:03:11.899 Version: 1.5.0 00:03:11.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:11.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:11.899 Build type: native build 00:03:11.899 Project name: libvfio-user 00:03:11.899 Project version: 0.0.1 00:03:11.899 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:11.899 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:11.899 Host machine cpu family: x86_64 00:03:11.899 Host machine cpu: x86_64 00:03:11.899 Run-time dependency threads found: YES 00:03:11.899 Library dl found: YES 00:03:11.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:11.899 Run-time dependency json-c found: YES 0.17 00:03:11.899 Run-time dependency cmocka found: YES 1.1.7 00:03:11.899 Program pytest-3 found: NO 00:03:11.899 Program flake8 found: NO 00:03:11.899 Program misspell-fixer found: NO 00:03:11.899 Program restructuredtext-lint found: NO 00:03:11.899 Program valgrind found: YES (/usr/bin/valgrind) 00:03:11.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.899 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.899 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.899 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:11.899 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:11.899 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:11.899 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:11.899 Build targets in project: 8 00:03:11.899 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:11.899 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:11.899 00:03:11.899 libvfio-user 0.0.1 00:03:11.899 00:03:11.899 User defined options 00:03:11.899 buildtype : debug 00:03:11.899 default_library: shared 00:03:11.899 libdir : /usr/local/lib 00:03:11.899 00:03:11.899 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:12.157 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:12.157 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:12.157 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:12.415 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:12.415 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:12.415 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:12.415 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:12.416 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:12.416 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:12.416 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:12.416 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:12.416 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:12.416 [12/37] Compiling C object samples/null.p/null.c.o 00:03:12.416 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:12.416 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:12.416 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:12.416 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:12.416 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:12.416 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:12.416 [19/37] Compiling C object samples/server.p/server.c.o 00:03:12.416 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:12.416 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:12.416 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:12.416 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:12.416 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:12.416 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:12.416 [26/37] Compiling C object samples/client.p/client.c.o 00:03:12.416 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:12.416 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:12.416 [29/37] Linking target samples/client 00:03:12.416 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:12.675 [31/37] Linking target test/unit_tests 00:03:12.675 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:12.675 [33/37] Linking target samples/server 00:03:12.675 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:12.675 [35/37] Linking target samples/null 00:03:12.675 [36/37] Linking target samples/lspci 00:03:12.675 [37/37] Linking target samples/gpio-pci-idio-16 00:03:12.676 INFO: autodetecting backend as ninja 00:03:12.676 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
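The Meson configuration summarized above (debug buildtype, shared default_library) corresponds to a standard out-of-tree build; a rough sketch with an illustrative staging directory:

    # Sketch only: the build directory name and DESTDIR are illustrative.
    meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug                                                  # compile the targets listed above
    DESTDIR=/tmp/libvfio-user-root meson install --quiet -C build-debug   # staged install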
00:03:12.676 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:12.936 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:12.936 ninja: no work to do. 00:03:34.897 CC lib/ut/ut.o 00:03:34.897 CC lib/ut_mock/mock.o 00:03:34.897 CC lib/log/log.o 00:03:34.897 CC lib/log/log_flags.o 00:03:34.897 CC lib/log/log_deprecated.o 00:03:35.159 LIB libspdk_ut.a 00:03:35.159 LIB libspdk_log.a 00:03:35.159 LIB libspdk_ut_mock.a 00:03:35.159 SO libspdk_ut.so.2.0 00:03:35.159 SO libspdk_log.so.7.0 00:03:35.159 SO libspdk_ut_mock.so.6.0 00:03:35.159 SYMLINK libspdk_ut.so 00:03:35.159 SYMLINK libspdk_log.so 00:03:35.159 SYMLINK libspdk_ut_mock.so 00:03:35.730 CC lib/util/base64.o 00:03:35.730 CC lib/util/bit_array.o 00:03:35.730 CC lib/util/cpuset.o 00:03:35.731 CC lib/util/crc16.o 00:03:35.731 CC lib/util/crc32.o 00:03:35.731 CC lib/dma/dma.o 00:03:35.731 CC lib/util/crc32c.o 00:03:35.731 CC lib/util/crc32_ieee.o 00:03:35.731 CXX lib/trace_parser/trace.o 00:03:35.731 CC lib/util/crc64.o 00:03:35.731 CC lib/util/dif.o 00:03:35.731 CC lib/util/fd.o 00:03:35.731 CC lib/ioat/ioat.o 00:03:35.731 CC lib/util/fd_group.o 00:03:35.731 CC lib/util/file.o 00:03:35.731 CC lib/util/hexlify.o 00:03:35.731 CC lib/util/iov.o 00:03:35.731 CC lib/util/math.o 00:03:35.731 CC lib/util/net.o 00:03:35.731 CC lib/util/pipe.o 00:03:35.731 CC lib/util/strerror_tls.o 00:03:35.731 CC lib/util/string.o 00:03:35.731 CC lib/util/uuid.o 00:03:35.731 CC lib/util/xor.o 00:03:35.731 CC lib/util/zipf.o 00:03:35.731 CC lib/util/md5.o 00:03:35.991 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.991 CC lib/vfio_user/host/vfio_user.o 00:03:35.991 LIB libspdk_dma.a 00:03:35.991 SO libspdk_dma.so.5.0 00:03:35.991 LIB libspdk_ioat.a 00:03:35.991 SYMLINK libspdk_dma.so 00:03:35.991 SO libspdk_ioat.so.7.0 00:03:35.991 SYMLINK libspdk_ioat.so 00:03:36.251 LIB libspdk_vfio_user.a 00:03:36.251 LIB libspdk_util.a 00:03:36.251 SO libspdk_vfio_user.so.5.0 00:03:36.251 SYMLINK libspdk_vfio_user.so 00:03:36.251 SO libspdk_util.so.10.0 00:03:36.251 SYMLINK libspdk_util.so 00:03:36.513 LIB libspdk_trace_parser.a 00:03:36.513 SO libspdk_trace_parser.so.6.0 00:03:36.513 SYMLINK libspdk_trace_parser.so 00:03:36.774 CC lib/vmd/vmd.o 00:03:36.774 CC lib/json/json_parse.o 00:03:36.774 CC lib/json/json_write.o 00:03:36.774 CC lib/vmd/led.o 00:03:36.774 CC lib/json/json_util.o 00:03:36.774 CC lib/env_dpdk/env.o 00:03:36.774 CC lib/conf/conf.o 00:03:36.774 CC lib/env_dpdk/memory.o 00:03:36.774 CC lib/env_dpdk/pci.o 00:03:36.774 CC lib/env_dpdk/init.o 00:03:36.774 CC lib/idxd/idxd.o 00:03:36.774 CC lib/env_dpdk/threads.o 00:03:36.774 CC lib/idxd/idxd_user.o 00:03:36.774 CC lib/env_dpdk/pci_ioat.o 00:03:36.774 CC lib/idxd/idxd_kernel.o 00:03:36.774 CC lib/env_dpdk/pci_virtio.o 00:03:36.774 CC lib/rdma_utils/rdma_utils.o 00:03:36.774 CC lib/rdma_provider/common.o 00:03:36.774 CC lib/env_dpdk/pci_vmd.o 00:03:36.774 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:36.774 CC lib/env_dpdk/pci_idxd.o 00:03:36.774 CC lib/env_dpdk/pci_event.o 00:03:36.774 CC lib/env_dpdk/sigbus_handler.o 00:03:36.774 CC lib/env_dpdk/pci_dpdk.o 00:03:36.774 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:36.774 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.035 LIB libspdk_rdma_provider.a 00:03:37.035 LIB libspdk_conf.a 00:03:37.035 SO libspdk_rdma_provider.so.6.0 00:03:37.035 SO libspdk_conf.so.6.0 00:03:37.035 
LIB libspdk_json.a 00:03:37.035 LIB libspdk_rdma_utils.a 00:03:37.035 SYMLINK libspdk_rdma_provider.so 00:03:37.035 SO libspdk_rdma_utils.so.1.0 00:03:37.035 SO libspdk_json.so.6.0 00:03:37.035 SYMLINK libspdk_conf.so 00:03:37.035 SYMLINK libspdk_rdma_utils.so 00:03:37.035 SYMLINK libspdk_json.so 00:03:37.296 LIB libspdk_idxd.a 00:03:37.296 LIB libspdk_vmd.a 00:03:37.296 SO libspdk_idxd.so.12.1 00:03:37.296 SO libspdk_vmd.so.6.0 00:03:37.296 SYMLINK libspdk_idxd.so 00:03:37.296 SYMLINK libspdk_vmd.so 00:03:37.558 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.558 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.558 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.558 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:37.819 LIB libspdk_jsonrpc.a 00:03:37.819 SO libspdk_jsonrpc.so.6.0 00:03:37.819 SYMLINK libspdk_jsonrpc.so 00:03:37.819 LIB libspdk_env_dpdk.a 00:03:38.080 SO libspdk_env_dpdk.so.15.0 00:03:38.080 SYMLINK libspdk_env_dpdk.so 00:03:38.080 CC lib/rpc/rpc.o 00:03:38.342 LIB libspdk_rpc.a 00:03:38.342 SO libspdk_rpc.so.6.0 00:03:38.604 SYMLINK libspdk_rpc.so 00:03:38.864 CC lib/notify/notify.o 00:03:38.864 CC lib/trace/trace.o 00:03:38.864 CC lib/notify/notify_rpc.o 00:03:38.864 CC lib/trace/trace_flags.o 00:03:38.864 CC lib/trace/trace_rpc.o 00:03:38.864 CC lib/keyring/keyring.o 00:03:38.865 CC lib/keyring/keyring_rpc.o 00:03:39.125 LIB libspdk_notify.a 00:03:39.125 SO libspdk_notify.so.6.0 00:03:39.125 LIB libspdk_keyring.a 00:03:39.125 LIB libspdk_trace.a 00:03:39.125 SYMLINK libspdk_notify.so 00:03:39.125 SO libspdk_keyring.so.2.0 00:03:39.125 SO libspdk_trace.so.11.0 00:03:39.387 SYMLINK libspdk_keyring.so 00:03:39.387 SYMLINK libspdk_trace.so 00:03:39.648 CC lib/thread/thread.o 00:03:39.648 CC lib/thread/iobuf.o 00:03:39.648 CC lib/sock/sock.o 00:03:39.648 CC lib/sock/sock_rpc.o 00:03:39.909 LIB libspdk_sock.a 00:03:40.169 SO libspdk_sock.so.10.0 00:03:40.169 SYMLINK libspdk_sock.so 00:03:40.430 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:40.430 CC lib/nvme/nvme_ctrlr.o 00:03:40.430 CC lib/nvme/nvme_fabric.o 00:03:40.430 CC lib/nvme/nvme_ns_cmd.o 00:03:40.430 CC lib/nvme/nvme_ns.o 00:03:40.430 CC lib/nvme/nvme_pcie_common.o 00:03:40.430 CC lib/nvme/nvme_pcie.o 00:03:40.430 CC lib/nvme/nvme_quirks.o 00:03:40.430 CC lib/nvme/nvme_qpair.o 00:03:40.430 CC lib/nvme/nvme.o 00:03:40.430 CC lib/nvme/nvme_transport.o 00:03:40.430 CC lib/nvme/nvme_discovery.o 00:03:40.430 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.430 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.430 CC lib/nvme/nvme_tcp.o 00:03:40.430 CC lib/nvme/nvme_opal.o 00:03:40.430 CC lib/nvme/nvme_io_msg.o 00:03:40.430 CC lib/nvme/nvme_poll_group.o 00:03:40.430 CC lib/nvme/nvme_zns.o 00:03:40.430 CC lib/nvme/nvme_stubs.o 00:03:40.430 CC lib/nvme/nvme_auth.o 00:03:40.430 CC lib/nvme/nvme_cuse.o 00:03:40.430 CC lib/nvme/nvme_vfio_user.o 00:03:40.430 CC lib/nvme/nvme_rdma.o 00:03:40.999 LIB libspdk_thread.a 00:03:40.999 SO libspdk_thread.so.10.1 00:03:40.999 SYMLINK libspdk_thread.so 00:03:41.569 CC lib/vfu_tgt/tgt_endpoint.o 00:03:41.569 CC lib/vfu_tgt/tgt_rpc.o 00:03:41.569 CC lib/accel/accel.o 00:03:41.569 CC lib/accel/accel_rpc.o 00:03:41.569 CC lib/accel/accel_sw.o 00:03:41.569 CC lib/init/json_config.o 00:03:41.569 CC lib/blob/blobstore.o 00:03:41.569 CC lib/init/subsystem.o 00:03:41.569 CC lib/blob/request.o 00:03:41.569 CC lib/blob/zeroes.o 00:03:41.569 CC lib/init/subsystem_rpc.o 00:03:41.569 CC lib/blob/blob_bs_dev.o 00:03:41.569 CC lib/init/rpc.o 00:03:41.569 CC lib/fsdev/fsdev.o 00:03:41.569 CC lib/fsdev/fsdev_io.o 00:03:41.569 CC lib/fsdev/fsdev_rpc.o 
00:03:41.569 CC lib/virtio/virtio.o 00:03:41.569 CC lib/virtio/virtio_vhost_user.o 00:03:41.569 CC lib/virtio/virtio_vfio_user.o 00:03:41.569 CC lib/virtio/virtio_pci.o 00:03:41.569 LIB libspdk_init.a 00:03:41.829 SO libspdk_init.so.6.0 00:03:41.829 LIB libspdk_vfu_tgt.a 00:03:41.829 LIB libspdk_virtio.a 00:03:41.829 SO libspdk_vfu_tgt.so.3.0 00:03:41.829 SYMLINK libspdk_init.so 00:03:41.829 SO libspdk_virtio.so.7.0 00:03:41.829 SYMLINK libspdk_vfu_tgt.so 00:03:41.829 SYMLINK libspdk_virtio.so 00:03:42.090 LIB libspdk_fsdev.a 00:03:42.090 SO libspdk_fsdev.so.1.0 00:03:42.090 CC lib/event/app.o 00:03:42.090 CC lib/event/reactor.o 00:03:42.090 CC lib/event/log_rpc.o 00:03:42.090 CC lib/event/app_rpc.o 00:03:42.090 CC lib/event/scheduler_static.o 00:03:42.090 SYMLINK libspdk_fsdev.so 00:03:42.090 LIB libspdk_accel.a 00:03:42.090 SO libspdk_accel.so.16.0 00:03:42.351 SYMLINK libspdk_accel.so 00:03:42.351 LIB libspdk_nvme.a 00:03:42.351 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:42.611 LIB libspdk_event.a 00:03:42.611 SO libspdk_nvme.so.14.0 00:03:42.611 SO libspdk_event.so.14.0 00:03:42.611 CC lib/bdev/bdev.o 00:03:42.611 CC lib/bdev/bdev_rpc.o 00:03:42.611 CC lib/bdev/bdev_zone.o 00:03:42.611 CC lib/bdev/part.o 00:03:42.611 CC lib/bdev/scsi_nvme.o 00:03:42.611 SYMLINK libspdk_event.so 00:03:42.871 SYMLINK libspdk_nvme.so 00:03:43.132 LIB libspdk_fuse_dispatcher.a 00:03:43.132 SO libspdk_fuse_dispatcher.so.1.0 00:03:43.132 SYMLINK libspdk_fuse_dispatcher.so 00:03:44.072 LIB libspdk_blob.a 00:03:44.072 SO libspdk_blob.so.11.0 00:03:44.072 SYMLINK libspdk_blob.so 00:03:44.332 CC lib/blobfs/blobfs.o 00:03:44.332 CC lib/blobfs/tree.o 00:03:44.332 CC lib/lvol/lvol.o 00:03:44.593 LIB libspdk_bdev.a 00:03:44.593 SO libspdk_bdev.so.16.0 00:03:44.593 SYMLINK libspdk_bdev.so 00:03:44.854 CC lib/nvmf/ctrlr.o 00:03:44.854 CC lib/nvmf/ctrlr_discovery.o 00:03:44.854 CC lib/nvmf/ctrlr_bdev.o 00:03:44.854 CC lib/nvmf/subsystem.o 00:03:44.854 CC lib/nvmf/nvmf.o 00:03:44.854 CC lib/nvmf/nvmf_rpc.o 00:03:44.854 CC lib/ftl/ftl_core.o 00:03:44.854 CC lib/nvmf/transport.o 00:03:44.854 CC lib/ftl/ftl_init.o 00:03:44.854 CC lib/nvmf/tcp.o 00:03:44.854 CC lib/ftl/ftl_layout.o 00:03:44.854 CC lib/nvmf/stubs.o 00:03:44.854 CC lib/ftl/ftl_debug.o 00:03:44.854 CC lib/nvmf/mdns_server.o 00:03:44.854 CC lib/ftl/ftl_io.o 00:03:44.854 CC lib/nvmf/vfio_user.o 00:03:44.854 CC lib/ftl/ftl_sb.o 00:03:44.854 CC lib/nvmf/rdma.o 00:03:44.854 CC lib/ftl/ftl_l2p.o 00:03:44.854 CC lib/nvmf/auth.o 00:03:44.854 CC lib/ftl/ftl_l2p_flat.o 00:03:44.854 CC lib/scsi/lun.o 00:03:44.854 CC lib/ftl/ftl_nv_cache.o 00:03:44.854 CC lib/scsi/dev.o 00:03:45.113 CC lib/nbd/nbd.o 00:03:45.113 CC lib/ftl/ftl_band.o 00:03:45.113 CC lib/nbd/nbd_rpc.o 00:03:45.113 CC lib/scsi/port.o 00:03:45.113 CC lib/ftl/ftl_band_ops.o 00:03:45.113 CC lib/ublk/ublk.o 00:03:45.113 CC lib/scsi/scsi.o 00:03:45.113 CC lib/ftl/ftl_writer.o 00:03:45.113 CC lib/ublk/ublk_rpc.o 00:03:45.113 CC lib/ftl/ftl_rq.o 00:03:45.113 CC lib/scsi/scsi_bdev.o 00:03:45.113 CC lib/ftl/ftl_reloc.o 00:03:45.113 CC lib/scsi/scsi_pr.o 00:03:45.113 CC lib/scsi/scsi_rpc.o 00:03:45.113 CC lib/ftl/ftl_l2p_cache.o 00:03:45.113 CC lib/ftl/ftl_p2l.o 00:03:45.113 CC lib/scsi/task.o 00:03:45.113 CC lib/ftl/ftl_p2l_log.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:45.113 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:45.113 CC lib/ftl/utils/ftl_bitmap.o 00:03:45.113 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:45.113 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:45.113 CC lib/ftl/utils/ftl_conf.o 00:03:45.113 CC lib/ftl/utils/ftl_md.o 00:03:45.113 CC lib/ftl/utils/ftl_mempool.o 00:03:45.113 CC lib/ftl/utils/ftl_property.o 00:03:45.113 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:45.113 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:45.113 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:45.113 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:45.113 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:45.113 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:45.113 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:45.113 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:45.113 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:45.113 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:45.113 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:45.113 CC lib/ftl/base/ftl_base_bdev.o 00:03:45.113 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:45.113 CC lib/ftl/base/ftl_base_dev.o 00:03:45.113 CC lib/ftl/ftl_trace.o 00:03:45.113 LIB libspdk_blobfs.a 00:03:45.372 SO libspdk_blobfs.so.10.0 00:03:45.372 LIB libspdk_lvol.a 00:03:45.372 SO libspdk_lvol.so.10.0 00:03:45.372 SYMLINK libspdk_blobfs.so 00:03:45.372 SYMLINK libspdk_lvol.so 00:03:45.632 LIB libspdk_scsi.a 00:03:45.632 SO libspdk_scsi.so.9.0 00:03:45.632 LIB libspdk_ublk.a 00:03:45.632 LIB libspdk_nbd.a 00:03:45.632 SO libspdk_ublk.so.3.0 00:03:45.632 SO libspdk_nbd.so.7.0 00:03:45.632 SYMLINK libspdk_scsi.so 00:03:45.632 SYMLINK libspdk_ublk.so 00:03:45.632 SYMLINK libspdk_nbd.so 00:03:46.204 CC lib/vhost/vhost.o 00:03:46.204 CC lib/vhost/vhost_rpc.o 00:03:46.204 CC lib/vhost/vhost_blk.o 00:03:46.204 CC lib/vhost/vhost_scsi.o 00:03:46.204 CC lib/vhost/rte_vhost_user.o 00:03:46.204 CC lib/iscsi/conn.o 00:03:46.204 CC lib/iscsi/iscsi.o 00:03:46.204 CC lib/iscsi/init_grp.o 00:03:46.204 CC lib/iscsi/param.o 00:03:46.204 CC lib/iscsi/portal_grp.o 00:03:46.204 CC lib/iscsi/tgt_node.o 00:03:46.204 CC lib/iscsi/iscsi_subsystem.o 00:03:46.204 CC lib/iscsi/iscsi_rpc.o 00:03:46.204 CC lib/iscsi/task.o 00:03:46.204 LIB libspdk_ftl.a 00:03:46.204 SO libspdk_ftl.so.9.0 00:03:46.466 SYMLINK libspdk_ftl.so 00:03:47.038 LIB libspdk_nvmf.a 00:03:47.038 LIB libspdk_vhost.a 00:03:47.038 SO libspdk_nvmf.so.19.0 00:03:47.038 SO libspdk_vhost.so.8.0 00:03:47.299 SYMLINK libspdk_vhost.so 00:03:47.299 LIB libspdk_iscsi.a 00:03:47.299 SYMLINK libspdk_nvmf.so 00:03:47.299 SO libspdk_iscsi.so.8.0 00:03:47.561 SYMLINK libspdk_iscsi.so 00:03:48.158 CC module/vfu_device/vfu_virtio.o 00:03:48.158 CC module/vfu_device/vfu_virtio_blk.o 00:03:48.158 CC module/vfu_device/vfu_virtio_scsi.o 00:03:48.158 CC module/vfu_device/vfu_virtio_rpc.o 00:03:48.158 CC module/vfu_device/vfu_virtio_fs.o 00:03:48.158 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.158 CC module/accel/error/accel_error.o 00:03:48.158 CC module/fsdev/aio/fsdev_aio.o 00:03:48.158 CC module/accel/error/accel_error_rpc.o 00:03:48.158 CC module/keyring/file/keyring.o 00:03:48.158 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:48.158 LIB libspdk_env_dpdk_rpc.a 00:03:48.158 CC module/fsdev/aio/linux_aio_mgr.o 00:03:48.158 CC module/keyring/file/keyring_rpc.o 00:03:48.158 CC module/sock/posix/posix.o 00:03:48.158 CC module/blob/bdev/blob_bdev.o 00:03:48.158 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.158 CC module/keyring/linux/keyring.o 00:03:48.158 CC module/keyring/linux/keyring_rpc.o 00:03:48.158 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.158 CC module/accel/ioat/accel_ioat.o 00:03:48.158 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.158 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.158 CC module/accel/iaa/accel_iaa.o 00:03:48.158 CC module/accel/iaa/accel_iaa_rpc.o 00:03:48.158 CC module/accel/dsa/accel_dsa.o 00:03:48.158 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.158 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.420 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.420 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.420 LIB libspdk_keyring_file.a 00:03:48.420 LIB libspdk_scheduler_gscheduler.a 00:03:48.420 LIB libspdk_keyring_linux.a 00:03:48.420 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.420 LIB libspdk_accel_error.a 00:03:48.420 SO libspdk_keyring_linux.so.1.0 00:03:48.420 SO libspdk_scheduler_gscheduler.so.4.0 00:03:48.420 SO libspdk_keyring_file.so.2.0 00:03:48.420 LIB libspdk_accel_ioat.a 00:03:48.420 LIB libspdk_scheduler_dynamic.a 00:03:48.420 LIB libspdk_accel_iaa.a 00:03:48.420 SO libspdk_accel_error.so.2.0 00:03:48.420 SO libspdk_accel_iaa.so.3.0 00:03:48.420 SO libspdk_accel_ioat.so.6.0 00:03:48.420 SO libspdk_scheduler_dynamic.so.4.0 00:03:48.420 SYMLINK libspdk_keyring_linux.so 00:03:48.420 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.420 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.682 LIB libspdk_blob_bdev.a 00:03:48.682 SYMLINK libspdk_keyring_file.so 00:03:48.682 SYMLINK libspdk_accel_error.so 00:03:48.682 SYMLINK libspdk_accel_iaa.so 00:03:48.682 SYMLINK libspdk_scheduler_dynamic.so 00:03:48.682 LIB libspdk_accel_dsa.a 00:03:48.682 SO libspdk_blob_bdev.so.11.0 00:03:48.682 SYMLINK libspdk_accel_ioat.so 00:03:48.682 SO libspdk_accel_dsa.so.5.0 00:03:48.682 LIB libspdk_vfu_device.a 00:03:48.682 SYMLINK libspdk_blob_bdev.so 00:03:48.682 SO libspdk_vfu_device.so.3.0 00:03:48.682 SYMLINK libspdk_accel_dsa.so 00:03:48.682 SYMLINK libspdk_vfu_device.so 00:03:48.943 LIB libspdk_fsdev_aio.a 00:03:48.943 SO libspdk_fsdev_aio.so.1.0 00:03:48.943 LIB libspdk_sock_posix.a 00:03:48.943 SYMLINK libspdk_fsdev_aio.so 00:03:48.943 SO libspdk_sock_posix.so.6.0 00:03:49.204 SYMLINK libspdk_sock_posix.so 00:03:49.204 CC module/bdev/delay/vbdev_delay.o 00:03:49.204 CC module/bdev/error/vbdev_error.o 00:03:49.204 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.204 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.204 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.204 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:49.204 CC module/bdev/gpt/gpt.o 00:03:49.204 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.204 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.204 CC module/bdev/malloc/bdev_malloc.o 00:03:49.204 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.204 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.204 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.204 CC module/bdev/null/bdev_null.o 00:03:49.204 CC module/bdev/nvme/bdev_nvme.o 00:03:49.204 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:49.204 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.204 CC module/bdev/null/bdev_null_rpc.o 00:03:49.204 CC module/bdev/nvme/nvme_rpc.o 00:03:49.204 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.204 CC module/bdev/nvme/vbdev_opal.o 00:03:49.204 CC module/bdev/ftl/bdev_ftl.o 00:03:49.204 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.204 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:49.204 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:03:49.204 CC module/bdev/raid/bdev_raid.o 00:03:49.204 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:49.204 CC module/bdev/raid/bdev_raid_rpc.o 00:03:49.204 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:49.204 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:49.204 CC module/bdev/aio/bdev_aio.o 00:03:49.204 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:49.204 CC module/bdev/raid/raid1.o 00:03:49.204 CC module/bdev/aio/bdev_aio_rpc.o 00:03:49.204 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.204 CC module/bdev/raid/raid0.o 00:03:49.204 CC module/bdev/raid/concat.o 00:03:49.204 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:49.204 CC module/bdev/iscsi/bdev_iscsi.o 00:03:49.204 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:49.204 CC module/bdev/split/vbdev_split.o 00:03:49.204 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.464 LIB libspdk_bdev_null.a 00:03:49.464 LIB libspdk_blobfs_bdev.a 00:03:49.464 LIB libspdk_bdev_gpt.a 00:03:49.464 SO libspdk_blobfs_bdev.so.6.0 00:03:49.464 SO libspdk_bdev_null.so.6.0 00:03:49.464 LIB libspdk_bdev_ftl.a 00:03:49.464 SO libspdk_bdev_gpt.so.6.0 00:03:49.464 LIB libspdk_bdev_error.a 00:03:49.464 LIB libspdk_bdev_delay.a 00:03:49.464 LIB libspdk_bdev_split.a 00:03:49.464 LIB libspdk_bdev_zone_block.a 00:03:49.464 LIB libspdk_bdev_malloc.a 00:03:49.464 LIB libspdk_bdev_aio.a 00:03:49.464 SO libspdk_bdev_ftl.so.6.0 00:03:49.464 LIB libspdk_bdev_passthru.a 00:03:49.726 SO libspdk_bdev_split.so.6.0 00:03:49.726 SO libspdk_bdev_delay.so.6.0 00:03:49.726 SYMLINK libspdk_blobfs_bdev.so 00:03:49.726 SYMLINK libspdk_bdev_null.so 00:03:49.726 SO libspdk_bdev_error.so.6.0 00:03:49.726 SO libspdk_bdev_passthru.so.6.0 00:03:49.726 SO libspdk_bdev_zone_block.so.6.0 00:03:49.726 SO libspdk_bdev_aio.so.6.0 00:03:49.726 SO libspdk_bdev_malloc.so.6.0 00:03:49.726 SYMLINK libspdk_bdev_gpt.so 00:03:49.726 SYMLINK libspdk_bdev_split.so 00:03:49.726 SYMLINK libspdk_bdev_ftl.so 00:03:49.726 SYMLINK libspdk_bdev_delay.so 00:03:49.726 LIB libspdk_bdev_iscsi.a 00:03:49.726 SYMLINK libspdk_bdev_error.so 00:03:49.726 SYMLINK libspdk_bdev_malloc.so 00:03:49.726 SYMLINK libspdk_bdev_aio.so 00:03:49.726 SYMLINK libspdk_bdev_zone_block.so 00:03:49.726 SYMLINK libspdk_bdev_passthru.so 00:03:49.726 SO libspdk_bdev_iscsi.so.6.0 00:03:49.726 LIB libspdk_bdev_lvol.a 00:03:49.726 SO libspdk_bdev_lvol.so.6.0 00:03:49.726 SYMLINK libspdk_bdev_iscsi.so 00:03:49.726 LIB libspdk_bdev_virtio.a 00:03:49.726 SYMLINK libspdk_bdev_lvol.so 00:03:49.726 SO libspdk_bdev_virtio.so.6.0 00:03:49.986 SYMLINK libspdk_bdev_virtio.so 00:03:50.248 LIB libspdk_bdev_raid.a 00:03:50.248 SO libspdk_bdev_raid.so.6.0 00:03:50.248 SYMLINK libspdk_bdev_raid.so 00:03:51.189 LIB libspdk_bdev_nvme.a 00:03:51.450 SO libspdk_bdev_nvme.so.7.0 00:03:51.450 SYMLINK libspdk_bdev_nvme.so 00:03:52.021 CC module/event/subsystems/sock/sock.o 00:03:52.021 CC module/event/subsystems/iobuf/iobuf.o 00:03:52.021 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:52.021 CC module/event/subsystems/vmd/vmd.o 00:03:52.021 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:52.282 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:52.282 CC module/event/subsystems/keyring/keyring.o 00:03:52.282 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:52.282 CC module/event/subsystems/scheduler/scheduler.o 00:03:52.282 CC module/event/subsystems/fsdev/fsdev.o 00:03:52.282 LIB libspdk_event_vmd.a 00:03:52.282 LIB libspdk_event_sock.a 00:03:52.282 LIB libspdk_event_scheduler.a 00:03:52.282 LIB libspdk_event_iobuf.a 00:03:52.282 LIB 
libspdk_event_keyring.a 00:03:52.282 LIB libspdk_event_vfu_tgt.a 00:03:52.282 LIB libspdk_event_vhost_blk.a 00:03:52.282 LIB libspdk_event_fsdev.a 00:03:52.282 SO libspdk_event_vmd.so.6.0 00:03:52.282 SO libspdk_event_iobuf.so.3.0 00:03:52.282 SO libspdk_event_scheduler.so.4.0 00:03:52.282 SO libspdk_event_sock.so.5.0 00:03:52.282 SO libspdk_event_vhost_blk.so.3.0 00:03:52.282 SO libspdk_event_keyring.so.1.0 00:03:52.282 SO libspdk_event_vfu_tgt.so.3.0 00:03:52.282 SO libspdk_event_fsdev.so.1.0 00:03:52.543 SYMLINK libspdk_event_vmd.so 00:03:52.543 SYMLINK libspdk_event_sock.so 00:03:52.543 SYMLINK libspdk_event_vhost_blk.so 00:03:52.543 SYMLINK libspdk_event_iobuf.so 00:03:52.543 SYMLINK libspdk_event_scheduler.so 00:03:52.543 SYMLINK libspdk_event_vfu_tgt.so 00:03:52.543 SYMLINK libspdk_event_keyring.so 00:03:52.543 SYMLINK libspdk_event_fsdev.so 00:03:52.804 CC module/event/subsystems/accel/accel.o 00:03:53.065 LIB libspdk_event_accel.a 00:03:53.065 SO libspdk_event_accel.so.6.0 00:03:53.065 SYMLINK libspdk_event_accel.so 00:03:53.326 CC module/event/subsystems/bdev/bdev.o 00:03:53.587 LIB libspdk_event_bdev.a 00:03:53.587 SO libspdk_event_bdev.so.6.0 00:03:53.587 SYMLINK libspdk_event_bdev.so 00:03:54.161 CC module/event/subsystems/scsi/scsi.o 00:03:54.161 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:54.161 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:54.161 CC module/event/subsystems/nbd/nbd.o 00:03:54.161 CC module/event/subsystems/ublk/ublk.o 00:03:54.161 LIB libspdk_event_nbd.a 00:03:54.161 LIB libspdk_event_scsi.a 00:03:54.161 LIB libspdk_event_ublk.a 00:03:54.161 SO libspdk_event_nbd.so.6.0 00:03:54.161 SO libspdk_event_scsi.so.6.0 00:03:54.161 SO libspdk_event_ublk.so.3.0 00:03:54.161 SYMLINK libspdk_event_nbd.so 00:03:54.423 LIB libspdk_event_nvmf.a 00:03:54.423 SYMLINK libspdk_event_scsi.so 00:03:54.423 SYMLINK libspdk_event_ublk.so 00:03:54.423 SO libspdk_event_nvmf.so.6.0 00:03:54.423 SYMLINK libspdk_event_nvmf.so 00:03:54.685 CC module/event/subsystems/iscsi/iscsi.o 00:03:54.685 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:54.945 LIB libspdk_event_vhost_scsi.a 00:03:54.945 LIB libspdk_event_iscsi.a 00:03:54.945 SO libspdk_event_vhost_scsi.so.3.0 00:03:54.945 SO libspdk_event_iscsi.so.6.0 00:03:54.945 SYMLINK libspdk_event_vhost_scsi.so 00:03:54.945 SYMLINK libspdk_event_iscsi.so 00:03:55.206 SO libspdk.so.6.0 00:03:55.207 SYMLINK libspdk.so 00:03:55.468 TEST_HEADER include/spdk/accel.h 00:03:55.468 TEST_HEADER include/spdk/accel_module.h 00:03:55.468 TEST_HEADER include/spdk/assert.h 00:03:55.468 TEST_HEADER include/spdk/barrier.h 00:03:55.468 CC test/rpc_client/rpc_client_test.o 00:03:55.468 TEST_HEADER include/spdk/bdev.h 00:03:55.468 TEST_HEADER include/spdk/base64.h 00:03:55.468 TEST_HEADER include/spdk/bdev_module.h 00:03:55.468 TEST_HEADER include/spdk/bdev_zone.h 00:03:55.468 TEST_HEADER include/spdk/bit_pool.h 00:03:55.468 TEST_HEADER include/spdk/bit_array.h 00:03:55.468 TEST_HEADER include/spdk/blob_bdev.h 00:03:55.468 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:55.468 CXX app/trace/trace.o 00:03:55.468 TEST_HEADER include/spdk/blobfs.h 00:03:55.468 TEST_HEADER include/spdk/blob.h 00:03:55.468 TEST_HEADER include/spdk/conf.h 00:03:55.468 TEST_HEADER include/spdk/config.h 00:03:55.468 TEST_HEADER include/spdk/cpuset.h 00:03:55.468 TEST_HEADER include/spdk/crc16.h 00:03:55.468 TEST_HEADER include/spdk/crc32.h 00:03:55.468 TEST_HEADER include/spdk/crc64.h 00:03:55.468 TEST_HEADER include/spdk/dma.h 00:03:55.468 CC app/spdk_top/spdk_top.o 
00:03:55.468 TEST_HEADER include/spdk/dif.h 00:03:55.468 TEST_HEADER include/spdk/endian.h 00:03:55.468 CC app/spdk_nvme_identify/identify.o 00:03:55.468 CC app/spdk_lspci/spdk_lspci.o 00:03:55.468 TEST_HEADER include/spdk/env_dpdk.h 00:03:55.468 CC app/trace_record/trace_record.o 00:03:55.468 TEST_HEADER include/spdk/env.h 00:03:55.468 TEST_HEADER include/spdk/event.h 00:03:55.468 CC app/spdk_nvme_perf/perf.o 00:03:55.468 TEST_HEADER include/spdk/fd_group.h 00:03:55.468 TEST_HEADER include/spdk/fd.h 00:03:55.468 TEST_HEADER include/spdk/file.h 00:03:55.468 TEST_HEADER include/spdk/fsdev.h 00:03:55.468 CC app/spdk_nvme_discover/discovery_aer.o 00:03:55.468 TEST_HEADER include/spdk/fsdev_module.h 00:03:55.468 TEST_HEADER include/spdk/ftl.h 00:03:55.468 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:55.468 TEST_HEADER include/spdk/hexlify.h 00:03:55.468 TEST_HEADER include/spdk/gpt_spec.h 00:03:55.468 TEST_HEADER include/spdk/histogram_data.h 00:03:55.468 TEST_HEADER include/spdk/idxd.h 00:03:55.468 TEST_HEADER include/spdk/idxd_spec.h 00:03:55.468 TEST_HEADER include/spdk/init.h 00:03:55.468 TEST_HEADER include/spdk/ioat.h 00:03:55.468 TEST_HEADER include/spdk/ioat_spec.h 00:03:55.468 TEST_HEADER include/spdk/iscsi_spec.h 00:03:55.468 TEST_HEADER include/spdk/json.h 00:03:55.468 TEST_HEADER include/spdk/jsonrpc.h 00:03:55.468 TEST_HEADER include/spdk/keyring.h 00:03:55.468 TEST_HEADER include/spdk/likely.h 00:03:55.468 TEST_HEADER include/spdk/keyring_module.h 00:03:55.468 TEST_HEADER include/spdk/log.h 00:03:55.468 TEST_HEADER include/spdk/lvol.h 00:03:55.468 TEST_HEADER include/spdk/memory.h 00:03:55.468 TEST_HEADER include/spdk/md5.h 00:03:55.468 TEST_HEADER include/spdk/mmio.h 00:03:55.468 TEST_HEADER include/spdk/nbd.h 00:03:55.468 CC app/iscsi_tgt/iscsi_tgt.o 00:03:55.468 TEST_HEADER include/spdk/net.h 00:03:55.468 TEST_HEADER include/spdk/notify.h 00:03:55.468 TEST_HEADER include/spdk/nvme.h 00:03:55.468 TEST_HEADER include/spdk/nvme_intel.h 00:03:55.468 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:55.468 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:55.468 TEST_HEADER include/spdk/nvme_spec.h 00:03:55.468 TEST_HEADER include/spdk/nvme_zns.h 00:03:55.468 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:55.468 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:55.468 TEST_HEADER include/spdk/nvmf.h 00:03:55.468 TEST_HEADER include/spdk/nvmf_transport.h 00:03:55.468 TEST_HEADER include/spdk/nvmf_spec.h 00:03:55.468 TEST_HEADER include/spdk/opal.h 00:03:55.468 TEST_HEADER include/spdk/opal_spec.h 00:03:55.468 TEST_HEADER include/spdk/pci_ids.h 00:03:55.468 TEST_HEADER include/spdk/pipe.h 00:03:55.729 CC app/spdk_tgt/spdk_tgt.o 00:03:55.729 TEST_HEADER include/spdk/queue.h 00:03:55.729 TEST_HEADER include/spdk/scheduler.h 00:03:55.729 TEST_HEADER include/spdk/reduce.h 00:03:55.729 TEST_HEADER include/spdk/rpc.h 00:03:55.729 TEST_HEADER include/spdk/sock.h 00:03:55.729 TEST_HEADER include/spdk/scsi.h 00:03:55.729 TEST_HEADER include/spdk/stdinc.h 00:03:55.729 TEST_HEADER include/spdk/scsi_spec.h 00:03:55.729 TEST_HEADER include/spdk/string.h 00:03:55.729 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:55.729 TEST_HEADER include/spdk/thread.h 00:03:55.729 TEST_HEADER include/spdk/trace.h 00:03:55.729 CC app/nvmf_tgt/nvmf_main.o 00:03:55.729 CC app/spdk_dd/spdk_dd.o 00:03:55.729 TEST_HEADER include/spdk/trace_parser.h 00:03:55.729 TEST_HEADER include/spdk/tree.h 00:03:55.729 TEST_HEADER include/spdk/util.h 00:03:55.729 TEST_HEADER include/spdk/ublk.h 00:03:55.729 TEST_HEADER include/spdk/version.h 
00:03:55.729 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:55.729 TEST_HEADER include/spdk/uuid.h 00:03:55.729 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:55.729 TEST_HEADER include/spdk/vhost.h 00:03:55.729 TEST_HEADER include/spdk/vmd.h 00:03:55.729 TEST_HEADER include/spdk/zipf.h 00:03:55.729 TEST_HEADER include/spdk/xor.h 00:03:55.729 CXX test/cpp_headers/accel.o 00:03:55.729 CXX test/cpp_headers/accel_module.o 00:03:55.729 CXX test/cpp_headers/barrier.o 00:03:55.729 CXX test/cpp_headers/assert.o 00:03:55.729 CXX test/cpp_headers/bdev_zone.o 00:03:55.729 CXX test/cpp_headers/base64.o 00:03:55.729 CXX test/cpp_headers/bit_array.o 00:03:55.729 CXX test/cpp_headers/bdev.o 00:03:55.729 CXX test/cpp_headers/bdev_module.o 00:03:55.729 CXX test/cpp_headers/bit_pool.o 00:03:55.729 CXX test/cpp_headers/blobfs_bdev.o 00:03:55.729 CXX test/cpp_headers/blobfs.o 00:03:55.729 CXX test/cpp_headers/blob_bdev.o 00:03:55.729 CXX test/cpp_headers/conf.o 00:03:55.729 CXX test/cpp_headers/config.o 00:03:55.729 CXX test/cpp_headers/blob.o 00:03:55.729 CXX test/cpp_headers/crc16.o 00:03:55.729 CXX test/cpp_headers/cpuset.o 00:03:55.729 CXX test/cpp_headers/crc32.o 00:03:55.729 CXX test/cpp_headers/crc64.o 00:03:55.729 CXX test/cpp_headers/dif.o 00:03:55.729 CXX test/cpp_headers/dma.o 00:03:55.729 CXX test/cpp_headers/env_dpdk.o 00:03:55.729 CXX test/cpp_headers/endian.o 00:03:55.729 CXX test/cpp_headers/fd_group.o 00:03:55.729 CXX test/cpp_headers/fd.o 00:03:55.729 CXX test/cpp_headers/env.o 00:03:55.729 CXX test/cpp_headers/fsdev_module.o 00:03:55.729 CXX test/cpp_headers/fuse_dispatcher.o 00:03:55.729 CXX test/cpp_headers/event.o 00:03:55.729 CC examples/util/zipf/zipf.o 00:03:55.729 CXX test/cpp_headers/file.o 00:03:55.729 CC test/app/stub/stub.o 00:03:55.729 CC test/env/vtophys/vtophys.o 00:03:55.729 CXX test/cpp_headers/fsdev.o 00:03:55.729 CXX test/cpp_headers/ftl.o 00:03:55.729 CXX test/cpp_headers/idxd.o 00:03:55.729 CC test/env/memory/memory_ut.o 00:03:55.729 CXX test/cpp_headers/gpt_spec.o 00:03:55.729 CC test/app/jsoncat/jsoncat.o 00:03:55.729 CC test/env/pci/pci_ut.o 00:03:55.729 CC test/thread/poller_perf/poller_perf.o 00:03:55.729 CXX test/cpp_headers/init.o 00:03:55.729 CXX test/cpp_headers/hexlify.o 00:03:55.729 CXX test/cpp_headers/ioat_spec.o 00:03:55.729 CXX test/cpp_headers/iscsi_spec.o 00:03:55.729 CXX test/cpp_headers/idxd_spec.o 00:03:55.729 CXX test/cpp_headers/histogram_data.o 00:03:55.729 CXX test/cpp_headers/ioat.o 00:03:55.729 CXX test/cpp_headers/json.o 00:03:55.729 CXX test/cpp_headers/jsonrpc.o 00:03:55.729 CC test/app/histogram_perf/histogram_perf.o 00:03:55.729 CXX test/cpp_headers/log.o 00:03:55.729 CC test/dma/test_dma/test_dma.o 00:03:55.729 CXX test/cpp_headers/lvol.o 00:03:55.729 CXX test/cpp_headers/keyring.o 00:03:55.729 CXX test/cpp_headers/keyring_module.o 00:03:55.729 CXX test/cpp_headers/memory.o 00:03:55.729 CXX test/cpp_headers/likely.o 00:03:55.730 CXX test/cpp_headers/nbd.o 00:03:55.730 CXX test/cpp_headers/md5.o 00:03:55.730 CXX test/cpp_headers/mmio.o 00:03:55.730 CXX test/cpp_headers/notify.o 00:03:55.730 LINK rpc_client_test 00:03:55.730 CXX test/cpp_headers/net.o 00:03:55.730 CC test/app/bdev_svc/bdev_svc.o 00:03:55.730 CXX test/cpp_headers/nvme.o 00:03:55.730 CXX test/cpp_headers/nvme_spec.o 00:03:55.730 CXX test/cpp_headers/nvme_intel.o 00:03:55.730 CXX test/cpp_headers/nvme_ocssd.o 00:03:55.730 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:55.730 CXX test/cpp_headers/nvmf.o 00:03:55.988 CXX test/cpp_headers/nvmf_transport.o 00:03:55.988 CXX 
test/cpp_headers/nvmf_cmd.o 00:03:55.988 CXX test/cpp_headers/nvme_zns.o 00:03:55.988 CC examples/ioat/perf/perf.o 00:03:55.988 CXX test/cpp_headers/nvmf_spec.o 00:03:55.988 CXX test/cpp_headers/opal_spec.o 00:03:55.988 CXX test/cpp_headers/pci_ids.o 00:03:55.988 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:55.988 CXX test/cpp_headers/queue.o 00:03:55.988 CXX test/cpp_headers/opal.o 00:03:55.988 CC app/fio/bdev/fio_plugin.o 00:03:55.988 CXX test/cpp_headers/pipe.o 00:03:55.988 CXX test/cpp_headers/reduce.o 00:03:55.988 CXX test/cpp_headers/scsi_spec.o 00:03:55.988 CXX test/cpp_headers/scheduler.o 00:03:55.988 CXX test/cpp_headers/stdinc.o 00:03:55.988 CXX test/cpp_headers/rpc.o 00:03:55.988 CXX test/cpp_headers/sock.o 00:03:55.988 CXX test/cpp_headers/scsi.o 00:03:55.988 CXX test/cpp_headers/tree.o 00:03:55.988 CXX test/cpp_headers/ublk.o 00:03:55.988 CXX test/cpp_headers/thread.o 00:03:55.988 CXX test/cpp_headers/string.o 00:03:55.988 CXX test/cpp_headers/trace_parser.o 00:03:55.988 CC app/fio/nvme/fio_plugin.o 00:03:55.988 CXX test/cpp_headers/trace.o 00:03:55.988 CXX test/cpp_headers/vfio_user_spec.o 00:03:55.988 CXX test/cpp_headers/util.o 00:03:55.988 CXX test/cpp_headers/uuid.o 00:03:55.988 CXX test/cpp_headers/vmd.o 00:03:55.988 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:55.988 CXX test/cpp_headers/version.o 00:03:55.988 CXX test/cpp_headers/vfio_user_pci.o 00:03:55.988 CXX test/cpp_headers/vhost.o 00:03:55.988 CC examples/ioat/verify/verify.o 00:03:55.988 LINK interrupt_tgt 00:03:55.988 CXX test/cpp_headers/zipf.o 00:03:55.988 CXX test/cpp_headers/xor.o 00:03:55.988 LINK zipf 00:03:56.247 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.247 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.247 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:56.247 LINK histogram_perf 00:03:56.247 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:56.247 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.247 LINK env_dpdk_post_init 00:03:56.247 LINK spdk_lspci 00:03:56.247 LINK bdev_svc 00:03:56.247 LINK verify 00:03:56.247 LINK ioat_perf 00:03:56.506 LINK pci_ut 00:03:56.506 LINK spdk_trace_record 00:03:56.506 LINK spdk_nvme_perf 00:03:56.506 CC examples/idxd/perf/perf.o 00:03:56.506 CC examples/sock/hello_world/hello_sock.o 00:03:56.506 LINK nvmf_tgt 00:03:56.506 LINK spdk_nvme_discover 00:03:56.506 LINK jsoncat 00:03:56.506 CC examples/vmd/lsvmd/lsvmd.o 00:03:56.506 LINK spdk_tgt 00:03:56.506 CC examples/vmd/led/led.o 00:03:56.506 LINK spdk_nvme_identify 00:03:56.506 LINK vtophys 00:03:56.506 LINK poller_perf 00:03:56.506 LINK stub 00:03:56.506 CC examples/thread/thread/thread_ex.o 00:03:56.506 LINK vhost_fuzz 00:03:56.506 LINK nvme_fuzz 00:03:56.506 LINK iscsi_tgt 00:03:56.506 LINK spdk_bdev 00:03:56.767 LINK lsvmd 00:03:56.767 LINK led 00:03:56.767 LINK hello_sock 00:03:56.767 LINK idxd_perf 00:03:56.767 LINK spdk_trace 00:03:56.767 LINK mem_callbacks 00:03:56.767 LINK spdk_dd 00:03:56.767 LINK thread 00:03:57.028 CC test/event/reactor_perf/reactor_perf.o 00:03:57.028 CC test/event/reactor/reactor.o 00:03:57.028 CC test/event/event_perf/event_perf.o 00:03:57.028 CC test/event/app_repeat/app_repeat.o 00:03:57.028 LINK memory_ut 00:03:57.028 CC test/event/scheduler/scheduler.o 00:03:57.028 LINK test_dma 00:03:57.028 LINK spdk_nvme 00:03:57.028 LINK reactor_perf 00:03:57.028 LINK reactor 00:03:57.028 LINK event_perf 00:03:57.028 LINK app_repeat 00:03:57.028 LINK spdk_top 00:03:57.289 CC app/vhost/vhost.o 00:03:57.289 CC examples/nvme/arbitration/arbitration.o 00:03:57.289 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:03:57.289 CC examples/nvme/reconnect/reconnect.o 00:03:57.289 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:57.289 CC examples/nvme/abort/abort.o 00:03:57.289 CC examples/nvme/hello_world/hello_world.o 00:03:57.289 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:57.289 CC examples/nvme/hotplug/hotplug.o 00:03:57.289 LINK scheduler 00:03:57.549 LINK vhost 00:03:57.549 CC examples/accel/perf/accel_perf.o 00:03:57.549 CC examples/blob/hello_world/hello_blob.o 00:03:57.549 LINK pmr_persistence 00:03:57.549 CC examples/blob/cli/blobcli.o 00:03:57.549 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:57.549 LINK cmb_copy 00:03:57.549 LINK hello_world 00:03:57.549 LINK hotplug 00:03:57.549 LINK reconnect 00:03:57.549 LINK arbitration 00:03:57.549 CC test/nvme/aer/aer.o 00:03:57.549 LINK abort 00:03:57.549 CC test/nvme/reserve/reserve.o 00:03:57.549 CC test/nvme/startup/startup.o 00:03:57.549 CC test/nvme/err_injection/err_injection.o 00:03:57.549 CC test/nvme/sgl/sgl.o 00:03:57.549 CC test/nvme/cuse/cuse.o 00:03:57.549 CC test/nvme/overhead/overhead.o 00:03:57.549 CC test/nvme/fdp/fdp.o 00:03:57.549 CC test/nvme/e2edp/nvme_dp.o 00:03:57.549 CC test/nvme/simple_copy/simple_copy.o 00:03:57.549 CC test/nvme/reset/reset.o 00:03:57.549 CC test/nvme/boot_partition/boot_partition.o 00:03:57.549 CC test/accel/dif/dif.o 00:03:57.549 CC test/nvme/connect_stress/connect_stress.o 00:03:57.549 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:57.549 CC test/nvme/compliance/nvme_compliance.o 00:03:57.549 CC test/nvme/fused_ordering/fused_ordering.o 00:03:57.549 CC test/blobfs/mkfs/mkfs.o 00:03:57.809 LINK nvme_manage 00:03:57.809 LINK iscsi_fuzz 00:03:57.809 CC test/lvol/esnap/esnap.o 00:03:57.809 LINK hello_blob 00:03:57.809 LINK startup 00:03:57.809 LINK hello_fsdev 00:03:57.809 LINK boot_partition 00:03:57.809 LINK reserve 00:03:57.809 LINK connect_stress 00:03:57.809 LINK accel_perf 00:03:57.809 LINK doorbell_aers 00:03:57.809 LINK fused_ordering 00:03:57.809 LINK err_injection 00:03:57.809 LINK simple_copy 00:03:57.809 LINK aer 00:03:57.809 LINK reset 00:03:57.809 LINK mkfs 00:03:57.809 LINK sgl 00:03:57.809 LINK fdp 00:03:57.809 LINK overhead 00:03:57.809 LINK nvme_dp 00:03:57.809 LINK blobcli 00:03:58.070 LINK nvme_compliance 00:03:58.331 LINK dif 00:03:58.331 CC examples/bdev/hello_world/hello_bdev.o 00:03:58.331 CC examples/bdev/bdevperf/bdevperf.o 00:03:58.592 LINK hello_bdev 00:03:58.853 LINK cuse 00:03:58.853 CC test/bdev/bdevio/bdevio.o 00:03:59.138 LINK bdevperf 00:03:59.138 LINK bdevio 00:03:59.815 CC examples/nvmf/nvmf/nvmf.o 00:04:00.092 LINK nvmf 00:04:02.007 LINK esnap 00:04:02.268 00:04:02.268 real 0m52.210s 00:04:02.268 user 6m12.152s 00:04:02.268 sys 3m4.745s 00:04:02.268 17:04:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:02.268 17:04:00 make -- common/autotest_common.sh@10 -- $ set +x 00:04:02.268 ************************************ 00:04:02.268 END TEST make 00:04:02.268 ************************************ 00:04:02.530 17:04:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:02.530 17:04:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:02.530 17:04:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:02.530 17:04:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.530 17:04:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:02.530 17:04:00 -- pm/common@44 -- $ pid=2671892 00:04:02.530 
17:04:00 -- pm/common@50 -- $ kill -TERM 2671892 00:04:02.530 17:04:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.530 17:04:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:02.530 17:04:00 -- pm/common@44 -- $ pid=2671893 00:04:02.530 17:04:00 -- pm/common@50 -- $ kill -TERM 2671893 00:04:02.530 17:04:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.530 17:04:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:02.530 17:04:00 -- pm/common@44 -- $ pid=2671895 00:04:02.530 17:04:00 -- pm/common@50 -- $ kill -TERM 2671895 00:04:02.530 17:04:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.530 17:04:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:02.530 17:04:00 -- pm/common@44 -- $ pid=2671920 00:04:02.530 17:04:00 -- pm/common@50 -- $ sudo -E kill -TERM 2671920 00:04:02.530 17:04:00 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:02.530 17:04:00 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:02.530 17:04:00 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:02.530 17:04:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:02.530 17:04:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.530 17:04:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.530 17:04:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.530 17:04:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.530 17:04:01 -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.530 17:04:01 -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.530 17:04:01 -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.530 17:04:01 -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.530 17:04:01 -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.530 17:04:01 -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.530 17:04:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.530 17:04:01 -- scripts/common.sh@344 -- # case "$op" in 00:04:02.530 17:04:01 -- scripts/common.sh@345 -- # : 1 00:04:02.530 17:04:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.530 17:04:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.530 17:04:01 -- scripts/common.sh@365 -- # decimal 1 00:04:02.530 17:04:01 -- scripts/common.sh@353 -- # local d=1 00:04:02.530 17:04:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.530 17:04:01 -- scripts/common.sh@355 -- # echo 1 00:04:02.530 17:04:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.530 17:04:01 -- scripts/common.sh@366 -- # decimal 2 00:04:02.530 17:04:01 -- scripts/common.sh@353 -- # local d=2 00:04:02.530 17:04:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.530 17:04:01 -- scripts/common.sh@355 -- # echo 2 00:04:02.530 17:04:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.530 17:04:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.530 17:04:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.530 17:04:01 -- scripts/common.sh@368 -- # return 0 00:04:02.530 17:04:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.530 17:04:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:02.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.530 --rc genhtml_branch_coverage=1 00:04:02.530 --rc genhtml_function_coverage=1 00:04:02.530 --rc genhtml_legend=1 00:04:02.530 --rc geninfo_all_blocks=1 00:04:02.530 --rc geninfo_unexecuted_blocks=1 00:04:02.530 00:04:02.530 ' 00:04:02.530 17:04:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:02.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.530 --rc genhtml_branch_coverage=1 00:04:02.530 --rc genhtml_function_coverage=1 00:04:02.530 --rc genhtml_legend=1 00:04:02.530 --rc geninfo_all_blocks=1 00:04:02.530 --rc geninfo_unexecuted_blocks=1 00:04:02.530 00:04:02.530 ' 00:04:02.530 17:04:01 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:02.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.530 --rc genhtml_branch_coverage=1 00:04:02.530 --rc genhtml_function_coverage=1 00:04:02.530 --rc genhtml_legend=1 00:04:02.530 --rc geninfo_all_blocks=1 00:04:02.530 --rc geninfo_unexecuted_blocks=1 00:04:02.530 00:04:02.530 ' 00:04:02.530 17:04:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:02.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.530 --rc genhtml_branch_coverage=1 00:04:02.530 --rc genhtml_function_coverage=1 00:04:02.530 --rc genhtml_legend=1 00:04:02.530 --rc geninfo_all_blocks=1 00:04:02.530 --rc geninfo_unexecuted_blocks=1 00:04:02.530 00:04:02.530 ' 00:04:02.530 17:04:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.530 17:04:01 -- nvmf/common.sh@7 -- # uname -s 00:04:02.530 17:04:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.530 17:04:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.530 17:04:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.530 17:04:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.530 17:04:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.530 17:04:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.530 17:04:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.530 17:04:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.530 17:04:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.530 17:04:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.530 17:04:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.530 17:04:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:02.531 17:04:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.531 17:04:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.531 17:04:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:02.531 17:04:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.531 17:04:01 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.531 17:04:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.531 17:04:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.531 17:04:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.531 17:04:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.531 17:04:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.531 17:04:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.531 17:04:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.531 17:04:01 -- paths/export.sh@5 -- # export PATH 00:04:02.531 17:04:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.531 17:04:01 -- nvmf/common.sh@51 -- # : 0 00:04:02.531 17:04:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.531 17:04:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:02.531 17:04:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.531 17:04:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.531 17:04:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.531 17:04:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.531 17:04:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.531 17:04:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.531 17:04:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.531 17:04:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:02.531 17:04:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:02.792 17:04:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:02.792 17:04:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:02.792 17:04:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:02.792 17:04:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:02.792 17:04:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:02.792 17:04:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:02.792 17:04:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:02.792 17:04:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:02.792 17:04:01 -- spdk/autotest.sh@48 -- # udevadm_pid=2753698 00:04:02.792 17:04:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:02.792 17:04:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:02.792 17:04:01 -- pm/common@17 -- # local monitor 00:04:02.792 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.792 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.792 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.792 17:04:01 -- pm/common@21 -- # date +%s 00:04:02.792 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.792 17:04:01 -- pm/common@21 -- # date +%s 00:04:02.792 17:04:01 -- pm/common@25 -- # sleep 1 00:04:02.792 17:04:01 -- pm/common@21 -- # date +%s 00:04:02.792 17:04:01 -- pm/common@21 -- # date +%s 00:04:02.792 17:04:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727795041 00:04:02.792 17:04:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727795041 00:04:02.792 17:04:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727795041 00:04:02.792 17:04:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727795041 00:04:02.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727795041_collect-cpu-temp.pm.log 00:04:02.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727795041_collect-cpu-load.pm.log 00:04:02.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727795041_collect-vmstat.pm.log 00:04:02.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727795041_collect-bmc-pm.bmc.pm.log 00:04:03.737 17:04:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.737 17:04:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:03.737 17:04:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.737 17:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:03.737 17:04:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:03.738 17:04:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:03.738 17:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:03.738 17:04:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:03.738 17:04:02 
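The traced setup above saves the existing systemd-coredump handler and points the kernel at SPDK's core-collector script; a hedged sketch of the underlying mechanism (the sysctl target is inferred here, not shown verbatim in the log, and paths are illustrative):

    # Sketch only: requires root; collector path and output directory are illustrative.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)        # e.g. the systemd-coredump pipe saved above
    echo '|/path/to/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern   # %P pid, %s signal, %t time
    mkdir -p /path/to/output/coredumps                           # where collected cores are written
    # ... run the tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern     # typical restore step (not part of this log excerpt)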
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.738 17:04:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.738 17:04:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:03.738 17:04:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.738 17:04:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:03.738 17:04:02 -- common/autotest_common.sh@1455 -- # uname 00:04:03.738 17:04:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:03.738 17:04:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:03.738 17:04:02 -- common/autotest_common.sh@1475 -- # uname 00:04:03.738 17:04:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:03.738 17:04:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:03.738 17:04:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:03.738 lcov: LCOV version 1.15 00:04:03.738 17:04:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:30.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:30.323 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:33.625 17:04:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:33.625 17:04:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.625 17:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.625 17:04:31 -- spdk/autotest.sh@78 -- # rm -f 00:04:33.625 17:04:31 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.929 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:36.929 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:36.929 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:37.190 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:37.451 17:04:35 -- 
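The lcov run above captures a zero-count 'Baseline' snapshot before any test executes; a condensed sketch of the usual capture-and-combine flow (the post-test capture and merge are assumptions, not shown in this portion of the log):

    # Sketch only: output file names and the ./spdk path are illustrative.
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d ./spdk -o cov_base.info   # -i records zero counters
    # ... run the test suites ...
    lcov $LCOV_OPTS -q -c --no-external -t Tests -d ./spdk -o cov_test.info         # capture real counters
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info             # merge baseline + results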
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:37.451 17:04:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:37.452 17:04:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:37.452 17:04:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:37.452 17:04:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:37.452 17:04:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:37.452 17:04:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:37.452 17:04:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.452 17:04:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:37.452 17:04:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:37.452 17:04:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.452 17:04:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.452 17:04:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:37.452 17:04:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:37.452 17:04:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.452 No valid GPT data, bailing 00:04:37.452 17:04:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.452 17:04:35 -- scripts/common.sh@394 -- # pt= 00:04:37.452 17:04:35 -- scripts/common.sh@395 -- # return 1 00:04:37.452 17:04:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.452 1+0 records in 00:04:37.452 1+0 records out 00:04:37.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00162126 s, 647 MB/s 00:04:37.452 17:04:35 -- spdk/autotest.sh@105 -- # sync 00:04:37.452 17:04:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.452 17:04:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.452 17:04:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.453 17:04:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:47.453 17:04:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:47.453 17:04:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:47.453 17:04:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:49.371 Hugepages 00:04:49.371 node hugesize free / total 00:04:49.371 node0 1048576kB 0 / 0 00:04:49.371 node0 2048kB 0 / 0 00:04:49.371 node1 1048576kB 0 / 0 00:04:49.371 node1 2048kB 0 / 0 00:04:49.371 00:04:49.371 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.371 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:49.371 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:49.371 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:49.371 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:49.371 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:49.371 17:04:47 -- spdk/autotest.sh@117 -- # uname -s 00:04:49.371 17:04:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:49.371 17:04:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:49.371 17:04:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.675 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.675 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.589 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:54.850 17:04:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:55.791 17:04:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:55.791 17:04:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:55.792 17:04:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.792 17:04:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:55.792 17:04:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:55.792 17:04:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:55.792 17:04:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.792 17:04:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.792 17:04:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:56.052 17:04:54 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:56.052 17:04:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:56.052 17:04:54 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.346 Waiting for block devices as requested 00:04:59.346 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:59.346 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:59.604 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:59.604 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:59.864 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:59.864 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:59.864 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:00.123 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:00.123 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:00.123 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:05:00.382 17:04:58 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:00.382 17:04:58 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:05:00.382 17:04:58 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:00.382 17:04:58 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:00.382 17:04:58 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:00.382 17:04:58 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:00.382 17:04:58 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:05:00.382 17:04:58 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:00.382 17:04:58 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:00.382 17:04:58 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:00.382 17:04:58 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:00.382 17:04:58 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:00.382 17:04:58 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:00.382 17:04:58 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:00.382 17:04:58 -- common/autotest_common.sh@1541 -- # continue 00:05:00.382 17:04:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:00.383 17:04:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.383 17:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:00.641 17:04:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:00.641 17:04:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.641 17:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:00.641 17:04:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.938 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:03.938 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:04.198 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:04.457 17:05:02 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:04.457 17:05:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.457 17:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.457 17:05:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:04.457 17:05:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:04.457 17:05:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:04.457 17:05:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:04.457 17:05:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:04.457 17:05:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:04.457 17:05:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:04.457 17:05:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:04.457 17:05:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:04.457 17:05:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:04.457 17:05:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.457 17:05:02 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.457 17:05:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:04.717 17:05:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:04.717 17:05:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:05:04.717 17:05:03 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:04.717 17:05:03 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:04.717 17:05:03 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:05:04.717 17:05:03 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:04.717 17:05:03 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:04.717 17:05:03 -- common/autotest_common.sh@1570 -- # return 0 00:05:04.717 17:05:03 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:04.717 17:05:03 -- common/autotest_common.sh@1578 -- # return 0 00:05:04.717 17:05:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:04.717 17:05:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:04.717 17:05:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.717 17:05:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.717 17:05:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:04.717 17:05:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.717 17:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 17:05:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:04.718 17:05:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.718 17:05:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.718 17:05:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.718 17:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 ************************************ 00:05:04.718 START TEST env 00:05:04.718 ************************************ 00:05:04.718 17:05:03 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.718 * Looking for test storage... 
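The opal_revert_cleanup trace above shows how the harness enumerates NVMe controllers and filters them by PCI device ID: scripts/gen_nvme.sh emits a JSON config, jq extracts each traddr, and each controller's sysfs device file is compared against 0x0a54; the controller here reports 0xa80a, so the list comes back empty and nothing is reverted. A minimal sketch of those two helpers, reconstructed from the traced commands (the real bodies live in common/autotest_common.sh and may differ):

```bash
#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Reconstructed from the trace: list the PCI addresses of all NVMe controllers
# that scripts/gen_nvme.sh knows about.
get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1
    printf '%s\n' "${bdfs[@]}"
}

# Reconstructed from the trace: keep only controllers whose PCI device ID matches
# the requested value (0x0a54 in the opal_revert_cleanup step above).
get_nvme_bdfs_by_id() {
    local id=$1 bdf device
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$id" ]] && printf '%s\n' "$bdf"
    done
    return 0
}

get_nvme_bdfs_by_id 0x0a54    # empty in this run: 0000:65:00.0 reports 0xa80a
```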
00:05:04.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:04.718 17:05:03 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.718 17:05:03 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.718 17:05:03 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.978 17:05:03 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.978 17:05:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.978 17:05:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.978 17:05:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.978 17:05:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.978 17:05:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.978 17:05:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.979 17:05:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.979 17:05:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.979 17:05:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.979 17:05:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.979 17:05:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.979 17:05:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:04.979 17:05:03 env -- scripts/common.sh@345 -- # : 1 00:05:04.979 17:05:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.979 17:05:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.979 17:05:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:04.979 17:05:03 env -- scripts/common.sh@353 -- # local d=1 00:05:04.979 17:05:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.979 17:05:03 env -- scripts/common.sh@355 -- # echo 1 00:05:04.979 17:05:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.979 17:05:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:04.979 17:05:03 env -- scripts/common.sh@353 -- # local d=2 00:05:04.979 17:05:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.979 17:05:03 env -- scripts/common.sh@355 -- # echo 2 00:05:04.979 17:05:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.979 17:05:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.979 17:05:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.979 17:05:03 env -- scripts/common.sh@368 -- # return 0 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 
00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 17:05:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.979 17:05:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.979 17:05:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.979 ************************************ 00:05:04.979 START TEST env_memory 00:05:04.979 ************************************ 00:05:04.979 17:05:03 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.979 00:05:04.979 00:05:04.979 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.979 http://cunit.sourceforge.net/ 00:05:04.979 00:05:04.979 00:05:04.979 Suite: memory 00:05:04.979 Test: alloc and free memory map ...[2024-10-01 17:05:03.413688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.979 passed 00:05:04.979 Test: mem map translation ...[2024-10-01 17:05:03.439121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.979 [2024-10-01 17:05:03.439144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.979 [2024-10-01 17:05:03.439190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.979 [2024-10-01 17:05:03.439198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.979 passed 00:05:04.979 Test: mem map registration ...[2024-10-01 17:05:03.494442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:04.979 [2024-10-01 17:05:03.494463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:04.979 passed 00:05:05.241 Test: mem map adjacent registrations ...passed 00:05:05.241 00:05:05.241 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.241 suites 1 1 n/a 0 0 00:05:05.241 tests 4 4 4 0 0 00:05:05.241 asserts 152 152 152 0 n/a 00:05:05.241 00:05:05.241 Elapsed time = 0.194 seconds 00:05:05.241 00:05:05.241 real 0m0.208s 00:05:05.241 user 0m0.194s 00:05:05.241 sys 0m0.013s 00:05:05.241 17:05:03 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.241 17:05:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
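Every sub-test in this log (env_memory above, env_vtophys, env_pci and the rest below) is launched through the run_test helper whose xtrace brackets each START/END banner. The sketch below reproduces only the behaviour visible in the trace, i.e. the '[' 2 -le 1 ']' argument guard, the banners and the per-test timing; the actual implementation in common/autotest_common.sh is not shown in this log and may differ:

```bash
# run_test-style wrapper matching the behaviour visible in the trace; the real
# helper in common/autotest_common.sh may differ in detail.
run_test() {
    if [ "$#" -le 1 ]; then               # matches the traced '[' 2 -le 1 ']' guard
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                             # run the test; 'time' yields the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
```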
00:05:05.241 ************************************ 00:05:05.241 END TEST env_memory 00:05:05.241 ************************************ 00:05:05.241 17:05:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.241 17:05:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.241 17:05:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.241 17:05:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.241 ************************************ 00:05:05.241 START TEST env_vtophys 00:05:05.241 ************************************ 00:05:05.241 17:05:03 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.241 EAL: lib.eal log level changed from notice to debug 00:05:05.241 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.241 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.241 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.241 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.241 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.241 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.241 EAL: Detected lcore 6 as core 6 on socket 0 00:05:05.241 EAL: Detected lcore 7 as core 7 on socket 0 00:05:05.241 EAL: Detected lcore 8 as core 8 on socket 0 00:05:05.241 EAL: Detected lcore 9 as core 9 on socket 0 00:05:05.241 EAL: Detected lcore 10 as core 10 on socket 0 00:05:05.241 EAL: Detected lcore 11 as core 11 on socket 0 00:05:05.241 EAL: Detected lcore 12 as core 12 on socket 0 00:05:05.241 EAL: Detected lcore 13 as core 13 on socket 0 00:05:05.241 EAL: Detected lcore 14 as core 14 on socket 0 00:05:05.241 EAL: Detected lcore 15 as core 15 on socket 0 00:05:05.241 EAL: Detected lcore 16 as core 16 on socket 0 00:05:05.241 EAL: Detected lcore 17 as core 17 on socket 0 00:05:05.241 EAL: Detected lcore 18 as core 18 on socket 0 00:05:05.241 EAL: Detected lcore 19 as core 19 on socket 0 00:05:05.241 EAL: Detected lcore 20 as core 20 on socket 0 00:05:05.241 EAL: Detected lcore 21 as core 21 on socket 0 00:05:05.241 EAL: Detected lcore 22 as core 22 on socket 0 00:05:05.241 EAL: Detected lcore 23 as core 23 on socket 0 00:05:05.241 EAL: Detected lcore 24 as core 24 on socket 0 00:05:05.241 EAL: Detected lcore 25 as core 25 on socket 0 00:05:05.241 EAL: Detected lcore 26 as core 26 on socket 0 00:05:05.241 EAL: Detected lcore 27 as core 27 on socket 0 00:05:05.241 EAL: Detected lcore 28 as core 28 on socket 0 00:05:05.241 EAL: Detected lcore 29 as core 29 on socket 0 00:05:05.241 EAL: Detected lcore 30 as core 30 on socket 0 00:05:05.241 EAL: Detected lcore 31 as core 31 on socket 0 00:05:05.241 EAL: Detected lcore 32 as core 32 on socket 0 00:05:05.241 EAL: Detected lcore 33 as core 33 on socket 0 00:05:05.241 EAL: Detected lcore 34 as core 34 on socket 0 00:05:05.241 EAL: Detected lcore 35 as core 35 on socket 0 00:05:05.241 EAL: Detected lcore 36 as core 0 on socket 1 00:05:05.241 EAL: Detected lcore 37 as core 1 on socket 1 00:05:05.241 EAL: Detected lcore 38 as core 2 on socket 1 00:05:05.241 EAL: Detected lcore 39 as core 3 on socket 1 00:05:05.241 EAL: Detected lcore 40 as core 4 on socket 1 00:05:05.241 EAL: Detected lcore 41 as core 5 on socket 1 00:05:05.241 EAL: Detected lcore 42 as core 6 on socket 1 00:05:05.241 EAL: Detected lcore 43 as core 7 on socket 1 00:05:05.241 EAL: Detected lcore 44 as core 8 on socket 1 00:05:05.241 EAL: Detected lcore 45 as core 9 on socket 1 
00:05:05.241 EAL: Detected lcore 46 as core 10 on socket 1 00:05:05.241 EAL: Detected lcore 47 as core 11 on socket 1 00:05:05.241 EAL: Detected lcore 48 as core 12 on socket 1 00:05:05.241 EAL: Detected lcore 49 as core 13 on socket 1 00:05:05.241 EAL: Detected lcore 50 as core 14 on socket 1 00:05:05.241 EAL: Detected lcore 51 as core 15 on socket 1 00:05:05.241 EAL: Detected lcore 52 as core 16 on socket 1 00:05:05.241 EAL: Detected lcore 53 as core 17 on socket 1 00:05:05.241 EAL: Detected lcore 54 as core 18 on socket 1 00:05:05.241 EAL: Detected lcore 55 as core 19 on socket 1 00:05:05.241 EAL: Detected lcore 56 as core 20 on socket 1 00:05:05.241 EAL: Detected lcore 57 as core 21 on socket 1 00:05:05.241 EAL: Detected lcore 58 as core 22 on socket 1 00:05:05.241 EAL: Detected lcore 59 as core 23 on socket 1 00:05:05.241 EAL: Detected lcore 60 as core 24 on socket 1 00:05:05.241 EAL: Detected lcore 61 as core 25 on socket 1 00:05:05.241 EAL: Detected lcore 62 as core 26 on socket 1 00:05:05.241 EAL: Detected lcore 63 as core 27 on socket 1 00:05:05.241 EAL: Detected lcore 64 as core 28 on socket 1 00:05:05.241 EAL: Detected lcore 65 as core 29 on socket 1 00:05:05.241 EAL: Detected lcore 66 as core 30 on socket 1 00:05:05.241 EAL: Detected lcore 67 as core 31 on socket 1 00:05:05.242 EAL: Detected lcore 68 as core 32 on socket 1 00:05:05.242 EAL: Detected lcore 69 as core 33 on socket 1 00:05:05.242 EAL: Detected lcore 70 as core 34 on socket 1 00:05:05.242 EAL: Detected lcore 71 as core 35 on socket 1 00:05:05.242 EAL: Detected lcore 72 as core 0 on socket 0 00:05:05.242 EAL: Detected lcore 73 as core 1 on socket 0 00:05:05.242 EAL: Detected lcore 74 as core 2 on socket 0 00:05:05.242 EAL: Detected lcore 75 as core 3 on socket 0 00:05:05.242 EAL: Detected lcore 76 as core 4 on socket 0 00:05:05.242 EAL: Detected lcore 77 as core 5 on socket 0 00:05:05.242 EAL: Detected lcore 78 as core 6 on socket 0 00:05:05.242 EAL: Detected lcore 79 as core 7 on socket 0 00:05:05.242 EAL: Detected lcore 80 as core 8 on socket 0 00:05:05.242 EAL: Detected lcore 81 as core 9 on socket 0 00:05:05.242 EAL: Detected lcore 82 as core 10 on socket 0 00:05:05.242 EAL: Detected lcore 83 as core 11 on socket 0 00:05:05.242 EAL: Detected lcore 84 as core 12 on socket 0 00:05:05.242 EAL: Detected lcore 85 as core 13 on socket 0 00:05:05.242 EAL: Detected lcore 86 as core 14 on socket 0 00:05:05.242 EAL: Detected lcore 87 as core 15 on socket 0 00:05:05.242 EAL: Detected lcore 88 as core 16 on socket 0 00:05:05.242 EAL: Detected lcore 89 as core 17 on socket 0 00:05:05.242 EAL: Detected lcore 90 as core 18 on socket 0 00:05:05.242 EAL: Detected lcore 91 as core 19 on socket 0 00:05:05.242 EAL: Detected lcore 92 as core 20 on socket 0 00:05:05.242 EAL: Detected lcore 93 as core 21 on socket 0 00:05:05.242 EAL: Detected lcore 94 as core 22 on socket 0 00:05:05.242 EAL: Detected lcore 95 as core 23 on socket 0 00:05:05.242 EAL: Detected lcore 96 as core 24 on socket 0 00:05:05.242 EAL: Detected lcore 97 as core 25 on socket 0 00:05:05.242 EAL: Detected lcore 98 as core 26 on socket 0 00:05:05.242 EAL: Detected lcore 99 as core 27 on socket 0 00:05:05.242 EAL: Detected lcore 100 as core 28 on socket 0 00:05:05.242 EAL: Detected lcore 101 as core 29 on socket 0 00:05:05.242 EAL: Detected lcore 102 as core 30 on socket 0 00:05:05.242 EAL: Detected lcore 103 as core 31 on socket 0 00:05:05.242 EAL: Detected lcore 104 as core 32 on socket 0 00:05:05.242 EAL: Detected lcore 105 as core 33 on socket 0 00:05:05.242 EAL: 
Detected lcore 106 as core 34 on socket 0 00:05:05.242 EAL: Detected lcore 107 as core 35 on socket 0 00:05:05.242 EAL: Detected lcore 108 as core 0 on socket 1 00:05:05.242 EAL: Detected lcore 109 as core 1 on socket 1 00:05:05.242 EAL: Detected lcore 110 as core 2 on socket 1 00:05:05.242 EAL: Detected lcore 111 as core 3 on socket 1 00:05:05.242 EAL: Detected lcore 112 as core 4 on socket 1 00:05:05.242 EAL: Detected lcore 113 as core 5 on socket 1 00:05:05.242 EAL: Detected lcore 114 as core 6 on socket 1 00:05:05.242 EAL: Detected lcore 115 as core 7 on socket 1 00:05:05.242 EAL: Detected lcore 116 as core 8 on socket 1 00:05:05.242 EAL: Detected lcore 117 as core 9 on socket 1 00:05:05.242 EAL: Detected lcore 118 as core 10 on socket 1 00:05:05.242 EAL: Detected lcore 119 as core 11 on socket 1 00:05:05.242 EAL: Detected lcore 120 as core 12 on socket 1 00:05:05.242 EAL: Detected lcore 121 as core 13 on socket 1 00:05:05.242 EAL: Detected lcore 122 as core 14 on socket 1 00:05:05.242 EAL: Detected lcore 123 as core 15 on socket 1 00:05:05.242 EAL: Detected lcore 124 as core 16 on socket 1 00:05:05.242 EAL: Detected lcore 125 as core 17 on socket 1 00:05:05.242 EAL: Detected lcore 126 as core 18 on socket 1 00:05:05.242 EAL: Detected lcore 127 as core 19 on socket 1 00:05:05.242 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:05.242 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:05.242 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:05.242 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:05.242 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:05.242 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:05.242 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:05.242 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:05.242 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:05.242 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:05.242 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:05.242 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:05.242 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:05.242 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:05.242 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:05.242 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:05.242 EAL: Maximum logical cores by configuration: 128 00:05:05.242 EAL: Detected CPU lcores: 128 00:05:05.242 EAL: Detected NUMA nodes: 2 00:05:05.242 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:05.242 EAL: Detected shared linkage of DPDK 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:05.242 EAL: Registered [vdev] bus. 
00:05:05.242 EAL: bus.vdev log level changed from disabled to notice 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:05.242 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.242 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:05.242 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:05.242 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: Bus pci wants IOVA as 'DC' 00:05:05.242 EAL: Bus vdev wants IOVA as 'DC' 00:05:05.242 EAL: Buses did not request a specific IOVA mode. 00:05:05.242 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:05.242 EAL: Selected IOVA mode 'VA' 00:05:05.242 EAL: Probing VFIO support... 00:05:05.242 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.242 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.242 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.242 EAL: VFIO support initialized 00:05:05.242 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.242 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.242 EAL: Setting up physically contiguous memory... 
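The EAL probe above ends with "IOMMU is available, selecting IOVA as VA mode" and "VFIO support initialized", and the setup.sh status output earlier in this section lists per-node 2048kB hugepage counts. Both preconditions can be checked directly from a shell; the sysfs paths below are standard Linux locations and are an assumption, not taken from this trace:

```bash
# IOMMU present? A non-empty /sys/kernel/iommu_groups is what lets EAL pick IOVA=VA
# and initialize VFIO as reported above. (Standard sysfs paths, assumed, not traced.)
ls /sys/kernel/iommu_groups | wc -l

# Per-NUMA-node 2 MiB hugepage counts, the same numbers setup.sh status prints
# as "nodeN 2048kB free / total".
for node in /sys/devices/system/node/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    echo "$(basename "$node"): $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
done
```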
00:05:05.242 EAL: Setting maximum number of open files to 524288 00:05:05.242 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.242 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.242 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.242 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.242 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.242 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.242 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.242 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.242 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:05.242 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.242 EAL: Hugepages will be freed exactly as allocated. 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: TSC frequency is ~2400000 KHz 00:05:05.242 EAL: Main lcore 0 is ready (tid=7fc77a46aa00;cpuset=[0]) 00:05:05.242 EAL: Trying to obtain current memory policy. 00:05:05.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.242 EAL: Restoring previous memory policy: 0 00:05:05.242 EAL: request: mp_malloc_sync 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: No shared files mode enabled, IPC is disabled 00:05:05.242 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.243 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.243 00:05:05.243 00:05:05.243 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.243 http://cunit.sourceforge.net/ 00:05:05.243 00:05:05.243 00:05:05.243 Suite: components_suite 00:05:05.243 Test: vtophys_malloc_test ...passed 00:05:05.243 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.243 EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.243 EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.243 EAL: Trying to obtain current memory policy. 
00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.243 EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.243 EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.243 EAL: Restoring previous memory policy: 4 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.243 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.243 EAL: request: mp_malloc_sync 00:05:05.243 EAL: No shared files mode enabled, IPC is disabled 00:05:05.243 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.243 EAL: Trying to obtain current memory policy. 00:05:05.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.504 EAL: Restoring previous memory policy: 4 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.504 EAL: request: mp_malloc_sync 00:05:05.504 EAL: No shared files mode enabled, IPC is disabled 00:05:05.504 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.504 EAL: request: mp_malloc_sync 00:05:05.504 EAL: No shared files mode enabled, IPC is disabled 00:05:05.504 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.504 EAL: Trying to obtain current memory policy. 00:05:05.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.504 EAL: Restoring previous memory policy: 4 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.504 EAL: request: mp_malloc_sync 00:05:05.504 EAL: No shared files mode enabled, IPC is disabled 00:05:05.504 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.504 EAL: request: mp_malloc_sync 00:05:05.504 EAL: No shared files mode enabled, IPC is disabled 00:05:05.504 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.504 EAL: Trying to obtain current memory policy. 
00:05:05.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.504 EAL: Restoring previous memory policy: 4 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.504 EAL: request: mp_malloc_sync 00:05:05.504 EAL: No shared files mode enabled, IPC is disabled 00:05:05.504 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.504 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.765 EAL: request: mp_malloc_sync 00:05:05.765 EAL: No shared files mode enabled, IPC is disabled 00:05:05.765 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.765 EAL: Trying to obtain current memory policy. 00:05:05.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.765 EAL: Restoring previous memory policy: 4 00:05:05.765 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.765 EAL: request: mp_malloc_sync 00:05:05.765 EAL: No shared files mode enabled, IPC is disabled 00:05:05.765 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.025 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.025 EAL: request: mp_malloc_sync 00:05:06.025 EAL: No shared files mode enabled, IPC is disabled 00:05:06.025 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.025 passed 00:05:06.025 00:05:06.025 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.025 suites 1 1 n/a 0 0 00:05:06.025 tests 2 2 2 0 0 00:05:06.025 asserts 497 497 497 0 n/a 00:05:06.025 00:05:06.025 Elapsed time = 0.659 seconds 00:05:06.025 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.025 EAL: request: mp_malloc_sync 00:05:06.025 EAL: No shared files mode enabled, IPC is disabled 00:05:06.025 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.025 EAL: No shared files mode enabled, IPC is disabled 00:05:06.025 EAL: No shared files mode enabled, IPC is disabled 00:05:06.025 EAL: No shared files mode enabled, IPC is disabled 00:05:06.025 00:05:06.025 real 0m0.791s 00:05:06.025 user 0m0.398s 00:05:06.025 sys 0m0.356s 00:05:06.025 17:05:04 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.025 17:05:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.025 ************************************ 00:05:06.025 END TEST env_vtophys 00:05:06.025 ************************************ 00:05:06.025 17:05:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.025 17:05:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.025 17:05:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.025 17:05:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 ************************************ 00:05:06.026 START TEST env_pci 00:05:06.026 ************************************ 00:05:06.026 17:05:04 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.026 00:05:06.026 00:05:06.026 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.026 http://cunit.sourceforge.net/ 00:05:06.026 00:05:06.026 00:05:06.026 Suite: pci 00:05:06.026 Test: pci_hook ...[2024-10-01 17:05:04.535543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2772678 has claimed it 00:05:06.026 EAL: Cannot find device (10000:00:01.0) 00:05:06.026 EAL: Failed to attach device on primary process 00:05:06.026 passed 00:05:06.026 00:05:06.026 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:06.026 suites 1 1 n/a 0 0 00:05:06.026 tests 1 1 1 0 0 00:05:06.026 asserts 25 25 25 0 n/a 00:05:06.026 00:05:06.026 Elapsed time = 0.031 seconds 00:05:06.026 00:05:06.026 real 0m0.050s 00:05:06.026 user 0m0.014s 00:05:06.026 sys 0m0.036s 00:05:06.026 17:05:04 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.026 17:05:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 ************************************ 00:05:06.026 END TEST env_pci 00:05:06.026 ************************************ 00:05:06.286 17:05:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.286 17:05:04 env -- env/env.sh@15 -- # uname 00:05:06.286 17:05:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.286 17:05:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.286 17:05:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.286 17:05:04 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:06.286 17:05:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.286 17:05:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.286 ************************************ 00:05:06.286 START TEST env_dpdk_post_init 00:05:06.286 ************************************ 00:05:06.286 17:05:04 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.286 EAL: Detected CPU lcores: 128 00:05:06.286 EAL: Detected NUMA nodes: 2 00:05:06.286 EAL: Detected shared linkage of DPDK 00:05:06.286 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.286 EAL: Selected IOVA mode 'VA' 00:05:06.286 EAL: VFIO support initialized 00:05:06.286 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.286 EAL: Using IOMMU type 1 (Type 1) 00:05:06.546 EAL: Ignore mapping IO port bar(1) 00:05:06.546 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:06.806 EAL: Ignore mapping IO port bar(1) 00:05:06.806 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:06.806 EAL: Ignore mapping IO port bar(1) 00:05:07.065 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:07.065 EAL: Ignore mapping IO port bar(1) 00:05:07.326 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:07.326 EAL: Ignore mapping IO port bar(1) 00:05:07.587 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:07.587 EAL: Ignore mapping IO port bar(1) 00:05:07.587 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:07.847 EAL: Ignore mapping IO port bar(1) 00:05:07.847 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:08.106 EAL: Ignore mapping IO port bar(1) 00:05:08.106 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:08.366 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:08.366 EAL: Ignore mapping IO port bar(1) 00:05:08.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:08.627 EAL: Ignore mapping IO port bar(1) 00:05:08.887 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:08.887 EAL: Ignore mapping IO port bar(1) 00:05:09.147 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:09.147 EAL: Ignore mapping IO port bar(1) 00:05:09.147 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:09.407 EAL: Ignore mapping IO port bar(1) 00:05:09.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:09.667 EAL: Ignore mapping IO port bar(1) 00:05:09.667 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:09.927 EAL: Ignore mapping IO port bar(1) 00:05:09.927 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:09.927 EAL: Ignore mapping IO port bar(1) 00:05:10.187 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:10.187 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:10.187 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:10.187 Starting DPDK initialization... 00:05:10.187 Starting SPDK post initialization... 00:05:10.187 SPDK NVMe probe 00:05:10.187 Attaching to 0000:65:00.0 00:05:10.187 Attached to 0000:65:00.0 00:05:10.187 Cleaning up... 00:05:12.100 00:05:12.100 real 0m5.706s 00:05:12.100 user 0m0.179s 00:05:12.100 sys 0m0.074s 00:05:12.100 17:05:10 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.100 17:05:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.100 ************************************ 00:05:12.100 END TEST env_dpdk_post_init 00:05:12.100 ************************************ 00:05:12.100 17:05:10 env -- env/env.sh@26 -- # uname 00:05:12.100 17:05:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.100 17:05:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.100 17:05:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.100 17:05:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.100 17:05:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.100 ************************************ 00:05:12.100 START TEST env_mem_callbacks 00:05:12.100 ************************************ 00:05:12.100 17:05:10 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.100 EAL: Detected CPU lcores: 128 00:05:12.100 EAL: Detected NUMA nodes: 2 00:05:12.100 EAL: Detected shared linkage of DPDK 00:05:12.100 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.100 EAL: Selected IOVA mode 'VA' 00:05:12.100 EAL: VFIO support initialized 00:05:12.100 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.100 00:05:12.100 00:05:12.100 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.100 http://cunit.sourceforge.net/ 00:05:12.100 00:05:12.100 00:05:12.100 Suite: memory 00:05:12.100 Test: test ... 
00:05:12.100 register 0x200000200000 2097152 00:05:12.100 malloc 3145728 00:05:12.100 register 0x200000400000 4194304 00:05:12.100 buf 0x200000500000 len 3145728 PASSED 00:05:12.100 malloc 64 00:05:12.100 buf 0x2000004fff40 len 64 PASSED 00:05:12.100 malloc 4194304 00:05:12.100 register 0x200000800000 6291456 00:05:12.100 buf 0x200000a00000 len 4194304 PASSED 00:05:12.100 free 0x200000500000 3145728 00:05:12.100 free 0x2000004fff40 64 00:05:12.100 unregister 0x200000400000 4194304 PASSED 00:05:12.100 free 0x200000a00000 4194304 00:05:12.100 unregister 0x200000800000 6291456 PASSED 00:05:12.100 malloc 8388608 00:05:12.100 register 0x200000400000 10485760 00:05:12.100 buf 0x200000600000 len 8388608 PASSED 00:05:12.100 free 0x200000600000 8388608 00:05:12.100 unregister 0x200000400000 10485760 PASSED 00:05:12.100 passed 00:05:12.100 00:05:12.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.100 suites 1 1 n/a 0 0 00:05:12.100 tests 1 1 1 0 0 00:05:12.100 asserts 15 15 15 0 n/a 00:05:12.100 00:05:12.100 Elapsed time = 0.006 seconds 00:05:12.100 00:05:12.100 real 0m0.062s 00:05:12.100 user 0m0.016s 00:05:12.100 sys 0m0.046s 00:05:12.100 17:05:10 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.100 17:05:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.100 ************************************ 00:05:12.100 END TEST env_mem_callbacks 00:05:12.100 ************************************ 00:05:12.100 00:05:12.100 real 0m7.410s 00:05:12.100 user 0m1.081s 00:05:12.100 sys 0m0.870s 00:05:12.100 17:05:10 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.100 17:05:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.100 ************************************ 00:05:12.100 END TEST env 00:05:12.100 ************************************ 00:05:12.100 17:05:10 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.100 17:05:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.100 17:05:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.100 17:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:12.100 ************************************ 00:05:12.100 START TEST rpc 00:05:12.100 ************************************ 00:05:12.100 17:05:10 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.361 * Looking for test storage... 
00:05:12.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.361 17:05:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.361 17:05:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.361 17:05:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.361 17:05:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.361 17:05:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.361 17:05:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.361 17:05:10 rpc -- scripts/common.sh@345 -- # : 1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.361 17:05:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.361 17:05:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.361 17:05:10 rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.361 17:05:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.361 17:05:10 rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.361 17:05:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.361 17:05:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.361 17:05:10 rpc -- scripts/common.sh@368 -- # return 0 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.361 --rc genhtml_branch_coverage=1 00:05:12.361 --rc genhtml_function_coverage=1 00:05:12.361 --rc genhtml_legend=1 00:05:12.361 --rc geninfo_all_blocks=1 00:05:12.361 --rc geninfo_unexecuted_blocks=1 00:05:12.361 00:05:12.361 ' 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.361 --rc genhtml_branch_coverage=1 00:05:12.361 --rc genhtml_function_coverage=1 00:05:12.361 --rc genhtml_legend=1 00:05:12.361 --rc geninfo_all_blocks=1 00:05:12.361 --rc geninfo_unexecuted_blocks=1 00:05:12.361 00:05:12.361 ' 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.361 --rc genhtml_branch_coverage=1 00:05:12.361 --rc genhtml_function_coverage=1 
00:05:12.361 --rc genhtml_legend=1 00:05:12.361 --rc geninfo_all_blocks=1 00:05:12.361 --rc geninfo_unexecuted_blocks=1 00:05:12.361 00:05:12.361 ' 00:05:12.361 17:05:10 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.361 --rc genhtml_branch_coverage=1 00:05:12.361 --rc genhtml_function_coverage=1 00:05:12.361 --rc genhtml_legend=1 00:05:12.361 --rc geninfo_all_blocks=1 00:05:12.361 --rc geninfo_unexecuted_blocks=1 00:05:12.361 00:05:12.361 ' 00:05:12.361 17:05:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:12.361 17:05:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2774130 00:05:12.361 17:05:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.361 17:05:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2774130 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@831 -- # '[' -z 2774130 ']' 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.362 17:05:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.362 [2024-10-01 17:05:10.871757] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:12.362 [2024-10-01 17:05:10.871828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774130 ] 00:05:12.622 [2024-10-01 17:05:10.933481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.623 [2024-10-01 17:05:10.965604] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.623 [2024-10-01 17:05:10.965644] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2774130' to capture a snapshot of events at runtime. 00:05:12.623 [2024-10-01 17:05:10.965652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.623 [2024-10-01 17:05:10.965659] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.623 [2024-10-01 17:05:10.965665] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2774130 for offline analysis/debug. 
00:05:12.623 [2024-10-01 17:05:10.965683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.623 17:05:11 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.623 17:05:11 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.623 17:05:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.623 17:05:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.623 17:05:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.623 17:05:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.623 17:05:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.623 17:05:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.623 17:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 ************************************ 00:05:12.623 START TEST rpc_integrity 00:05:12.623 ************************************ 00:05:12.623 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:12.623 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.623 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.623 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.884 { 00:05:12.884 "name": "Malloc0", 00:05:12.884 "aliases": [ 00:05:12.884 "b5acd793-b3bd-4f4a-93e1-fc3ec6fe28d9" 00:05:12.884 ], 00:05:12.884 "product_name": "Malloc disk", 00:05:12.884 "block_size": 512, 00:05:12.884 "num_blocks": 16384, 00:05:12.884 "uuid": "b5acd793-b3bd-4f4a-93e1-fc3ec6fe28d9", 00:05:12.884 "assigned_rate_limits": { 00:05:12.884 "rw_ios_per_sec": 0, 00:05:12.884 "rw_mbytes_per_sec": 0, 00:05:12.884 "r_mbytes_per_sec": 0, 00:05:12.884 "w_mbytes_per_sec": 0 00:05:12.884 }, 
00:05:12.884 "claimed": false, 00:05:12.884 "zoned": false, 00:05:12.884 "supported_io_types": { 00:05:12.884 "read": true, 00:05:12.884 "write": true, 00:05:12.884 "unmap": true, 00:05:12.884 "flush": true, 00:05:12.884 "reset": true, 00:05:12.884 "nvme_admin": false, 00:05:12.884 "nvme_io": false, 00:05:12.884 "nvme_io_md": false, 00:05:12.884 "write_zeroes": true, 00:05:12.884 "zcopy": true, 00:05:12.884 "get_zone_info": false, 00:05:12.884 "zone_management": false, 00:05:12.884 "zone_append": false, 00:05:12.884 "compare": false, 00:05:12.884 "compare_and_write": false, 00:05:12.884 "abort": true, 00:05:12.884 "seek_hole": false, 00:05:12.884 "seek_data": false, 00:05:12.884 "copy": true, 00:05:12.884 "nvme_iov_md": false 00:05:12.884 }, 00:05:12.884 "memory_domains": [ 00:05:12.884 { 00:05:12.884 "dma_device_id": "system", 00:05:12.884 "dma_device_type": 1 00:05:12.884 }, 00:05:12.884 { 00:05:12.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.884 "dma_device_type": 2 00:05:12.884 } 00:05:12.884 ], 00:05:12.884 "driver_specific": {} 00:05:12.884 } 00:05:12.884 ]' 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 [2024-10-01 17:05:11.296396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.884 [2024-10-01 17:05:11.296428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.884 [2024-10-01 17:05:11.296441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d2050 00:05:12.884 [2024-10-01 17:05:11.296448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.884 [2024-10-01 17:05:11.297788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.884 [2024-10-01 17:05:11.297810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.884 Passthru0 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.884 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.884 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.884 { 00:05:12.884 "name": "Malloc0", 00:05:12.884 "aliases": [ 00:05:12.884 "b5acd793-b3bd-4f4a-93e1-fc3ec6fe28d9" 00:05:12.884 ], 00:05:12.884 "product_name": "Malloc disk", 00:05:12.884 "block_size": 512, 00:05:12.884 "num_blocks": 16384, 00:05:12.884 "uuid": "b5acd793-b3bd-4f4a-93e1-fc3ec6fe28d9", 00:05:12.884 "assigned_rate_limits": { 00:05:12.884 "rw_ios_per_sec": 0, 00:05:12.884 "rw_mbytes_per_sec": 0, 00:05:12.884 "r_mbytes_per_sec": 0, 00:05:12.884 "w_mbytes_per_sec": 0 00:05:12.884 }, 00:05:12.884 "claimed": true, 00:05:12.884 "claim_type": "exclusive_write", 00:05:12.884 "zoned": false, 00:05:12.884 "supported_io_types": { 00:05:12.884 "read": true, 00:05:12.884 "write": true, 00:05:12.884 "unmap": true, 00:05:12.884 "flush": 
true, 00:05:12.884 "reset": true, 00:05:12.884 "nvme_admin": false, 00:05:12.884 "nvme_io": false, 00:05:12.884 "nvme_io_md": false, 00:05:12.884 "write_zeroes": true, 00:05:12.884 "zcopy": true, 00:05:12.884 "get_zone_info": false, 00:05:12.884 "zone_management": false, 00:05:12.884 "zone_append": false, 00:05:12.884 "compare": false, 00:05:12.884 "compare_and_write": false, 00:05:12.884 "abort": true, 00:05:12.884 "seek_hole": false, 00:05:12.884 "seek_data": false, 00:05:12.884 "copy": true, 00:05:12.884 "nvme_iov_md": false 00:05:12.884 }, 00:05:12.884 "memory_domains": [ 00:05:12.884 { 00:05:12.884 "dma_device_id": "system", 00:05:12.884 "dma_device_type": 1 00:05:12.884 }, 00:05:12.884 { 00:05:12.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.884 "dma_device_type": 2 00:05:12.884 } 00:05:12.884 ], 00:05:12.884 "driver_specific": {} 00:05:12.884 }, 00:05:12.884 { 00:05:12.884 "name": "Passthru0", 00:05:12.884 "aliases": [ 00:05:12.884 "735f9328-ceb8-50ed-bde3-9fa0b3bc0fbf" 00:05:12.884 ], 00:05:12.884 "product_name": "passthru", 00:05:12.884 "block_size": 512, 00:05:12.884 "num_blocks": 16384, 00:05:12.884 "uuid": "735f9328-ceb8-50ed-bde3-9fa0b3bc0fbf", 00:05:12.884 "assigned_rate_limits": { 00:05:12.884 "rw_ios_per_sec": 0, 00:05:12.884 "rw_mbytes_per_sec": 0, 00:05:12.884 "r_mbytes_per_sec": 0, 00:05:12.884 "w_mbytes_per_sec": 0 00:05:12.884 }, 00:05:12.884 "claimed": false, 00:05:12.884 "zoned": false, 00:05:12.884 "supported_io_types": { 00:05:12.884 "read": true, 00:05:12.884 "write": true, 00:05:12.884 "unmap": true, 00:05:12.884 "flush": true, 00:05:12.884 "reset": true, 00:05:12.884 "nvme_admin": false, 00:05:12.884 "nvme_io": false, 00:05:12.884 "nvme_io_md": false, 00:05:12.884 "write_zeroes": true, 00:05:12.884 "zcopy": true, 00:05:12.884 "get_zone_info": false, 00:05:12.884 "zone_management": false, 00:05:12.884 "zone_append": false, 00:05:12.884 "compare": false, 00:05:12.884 "compare_and_write": false, 00:05:12.884 "abort": true, 00:05:12.884 "seek_hole": false, 00:05:12.884 "seek_data": false, 00:05:12.884 "copy": true, 00:05:12.884 "nvme_iov_md": false 00:05:12.884 }, 00:05:12.884 "memory_domains": [ 00:05:12.884 { 00:05:12.884 "dma_device_id": "system", 00:05:12.884 "dma_device_type": 1 00:05:12.884 }, 00:05:12.884 { 00:05:12.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.884 "dma_device_type": 2 00:05:12.884 } 00:05:12.884 ], 00:05:12.884 "driver_specific": { 00:05:12.884 "passthru": { 00:05:12.885 "name": "Passthru0", 00:05:12.885 "base_bdev_name": "Malloc0" 00:05:12.885 } 00:05:12.885 } 00:05:12.885 } 00:05:12.885 ]' 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.885 17:05:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.885 00:05:12.885 real 0m0.247s 00:05:12.885 user 0m0.151s 00:05:12.885 sys 0m0.036s 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.885 17:05:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.885 ************************************ 00:05:12.885 END TEST rpc_integrity 00:05:12.885 ************************************ 00:05:13.146 17:05:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 ************************************ 00:05:13.146 START TEST rpc_plugins 00:05:13.146 ************************************ 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.146 { 00:05:13.146 "name": "Malloc1", 00:05:13.146 "aliases": [ 00:05:13.146 "a5eba24c-9c73-4726-b77c-b84536c0dc54" 00:05:13.146 ], 00:05:13.146 "product_name": "Malloc disk", 00:05:13.146 "block_size": 4096, 00:05:13.146 "num_blocks": 256, 00:05:13.146 "uuid": "a5eba24c-9c73-4726-b77c-b84536c0dc54", 00:05:13.146 "assigned_rate_limits": { 00:05:13.146 "rw_ios_per_sec": 0, 00:05:13.146 "rw_mbytes_per_sec": 0, 00:05:13.146 "r_mbytes_per_sec": 0, 00:05:13.146 "w_mbytes_per_sec": 0 00:05:13.146 }, 00:05:13.146 "claimed": false, 00:05:13.146 "zoned": false, 00:05:13.146 "supported_io_types": { 00:05:13.146 "read": true, 00:05:13.146 "write": true, 00:05:13.146 "unmap": true, 00:05:13.146 "flush": true, 00:05:13.146 "reset": true, 00:05:13.146 "nvme_admin": false, 00:05:13.146 "nvme_io": false, 00:05:13.146 "nvme_io_md": false, 00:05:13.146 "write_zeroes": true, 00:05:13.146 "zcopy": true, 00:05:13.146 "get_zone_info": false, 00:05:13.146 "zone_management": false, 00:05:13.146 "zone_append": false, 00:05:13.146 "compare": false, 00:05:13.146 "compare_and_write": false, 00:05:13.146 "abort": true, 00:05:13.146 "seek_hole": false, 00:05:13.146 "seek_data": false, 00:05:13.146 "copy": true, 00:05:13.146 "nvme_iov_md": false 
00:05:13.146 }, 00:05:13.146 "memory_domains": [ 00:05:13.146 { 00:05:13.146 "dma_device_id": "system", 00:05:13.146 "dma_device_type": 1 00:05:13.146 }, 00:05:13.146 { 00:05:13.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.146 "dma_device_type": 2 00:05:13.146 } 00:05:13.146 ], 00:05:13.146 "driver_specific": {} 00:05:13.146 } 00:05:13.146 ]' 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.146 17:05:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.146 00:05:13.146 real 0m0.150s 00:05:13.146 user 0m0.097s 00:05:13.146 sys 0m0.015s 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.146 17:05:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.146 ************************************ 00:05:13.146 END TEST rpc_plugins 00:05:13.146 ************************************ 00:05:13.146 17:05:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.146 17:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.407 ************************************ 00:05:13.407 START TEST rpc_trace_cmd_test 00:05:13.407 ************************************ 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.407 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2774130", 00:05:13.407 "tpoint_group_mask": "0x8", 00:05:13.407 "iscsi_conn": { 00:05:13.407 "mask": "0x2", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "scsi": { 00:05:13.407 "mask": "0x4", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "bdev": { 00:05:13.407 "mask": "0x8", 00:05:13.407 "tpoint_mask": "0xffffffffffffffff" 00:05:13.407 }, 00:05:13.407 "nvmf_rdma": { 00:05:13.407 "mask": "0x10", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "nvmf_tcp": { 00:05:13.407 "mask": "0x20", 00:05:13.407 
"tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "ftl": { 00:05:13.407 "mask": "0x40", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "blobfs": { 00:05:13.407 "mask": "0x80", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "dsa": { 00:05:13.407 "mask": "0x200", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "thread": { 00:05:13.407 "mask": "0x400", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "nvme_pcie": { 00:05:13.407 "mask": "0x800", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "iaa": { 00:05:13.407 "mask": "0x1000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "nvme_tcp": { 00:05:13.407 "mask": "0x2000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "bdev_nvme": { 00:05:13.407 "mask": "0x4000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "sock": { 00:05:13.407 "mask": "0x8000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "blob": { 00:05:13.407 "mask": "0x10000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 }, 00:05:13.407 "bdev_raid": { 00:05:13.407 "mask": "0x20000", 00:05:13.407 "tpoint_mask": "0x0" 00:05:13.407 } 00:05:13.407 }' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.407 00:05:13.407 real 0m0.184s 00:05:13.407 user 0m0.150s 00:05:13.407 sys 0m0.023s 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.407 17:05:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.407 ************************************ 00:05:13.407 END TEST rpc_trace_cmd_test 00:05:13.407 ************************************ 00:05:13.407 17:05:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.407 17:05:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.407 17:05:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.407 17:05:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.407 17:05:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.407 17:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.668 ************************************ 00:05:13.668 START TEST rpc_daemon_integrity 00:05:13.668 ************************************ 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.668 17:05:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.668 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.668 { 00:05:13.668 "name": "Malloc2", 00:05:13.668 "aliases": [ 00:05:13.668 "5b1d47fa-62d3-442b-8fb1-328cdbaffe19" 00:05:13.668 ], 00:05:13.668 "product_name": "Malloc disk", 00:05:13.668 "block_size": 512, 00:05:13.668 "num_blocks": 16384, 00:05:13.668 "uuid": "5b1d47fa-62d3-442b-8fb1-328cdbaffe19", 00:05:13.668 "assigned_rate_limits": { 00:05:13.668 "rw_ios_per_sec": 0, 00:05:13.668 "rw_mbytes_per_sec": 0, 00:05:13.668 "r_mbytes_per_sec": 0, 00:05:13.668 "w_mbytes_per_sec": 0 00:05:13.668 }, 00:05:13.668 "claimed": false, 00:05:13.668 "zoned": false, 00:05:13.668 "supported_io_types": { 00:05:13.668 "read": true, 00:05:13.668 "write": true, 00:05:13.668 "unmap": true, 00:05:13.668 "flush": true, 00:05:13.668 "reset": true, 00:05:13.668 "nvme_admin": false, 00:05:13.668 "nvme_io": false, 00:05:13.668 "nvme_io_md": false, 00:05:13.668 "write_zeroes": true, 00:05:13.668 "zcopy": true, 00:05:13.668 "get_zone_info": false, 00:05:13.668 "zone_management": false, 00:05:13.668 "zone_append": false, 00:05:13.668 "compare": false, 00:05:13.668 "compare_and_write": false, 00:05:13.668 "abort": true, 00:05:13.668 "seek_hole": false, 00:05:13.668 "seek_data": false, 00:05:13.668 "copy": true, 00:05:13.668 "nvme_iov_md": false 00:05:13.668 }, 00:05:13.668 "memory_domains": [ 00:05:13.668 { 00:05:13.669 "dma_device_id": "system", 00:05:13.669 "dma_device_type": 1 00:05:13.669 }, 00:05:13.669 { 00:05:13.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.669 "dma_device_type": 2 00:05:13.669 } 00:05:13.669 ], 00:05:13.669 "driver_specific": {} 00:05:13.669 } 00:05:13.669 ]' 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 [2024-10-01 17:05:12.110602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.669 [2024-10-01 17:05:12.110632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.669 
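The rpc_integrity and rpc_daemon_integrity cases traced here walk the same bdev round-trip through the rpc_cmd wrapper: create a malloc bdev, claim it with a passthru vbdev, list both, then delete them in reverse order. Roughly the same sequence with the standalone client (socket and bdev names as in this trace; 8 MiB at 512-byte blocks matches the 16384-block JSON above) looks like:

    # create the malloc bdev; the call prints the new name (Malloc2 in this run)
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
    # claim it with a passthru vbdev, producing the vbdev_passthru notices seen here
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc2 -p Passthru0
    # both bdevs are now listed, the malloc one claimed with claim_type exclusive_write
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs
    # tear down: passthru first, then its base bdev
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc2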
[2024-10-01 17:05:12.110647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17d2a70 00:05:13.669 [2024-10-01 17:05:12.110654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.669 [2024-10-01 17:05:12.112011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.669 [2024-10-01 17:05:12.112033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.669 Passthru0 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.669 { 00:05:13.669 "name": "Malloc2", 00:05:13.669 "aliases": [ 00:05:13.669 "5b1d47fa-62d3-442b-8fb1-328cdbaffe19" 00:05:13.669 ], 00:05:13.669 "product_name": "Malloc disk", 00:05:13.669 "block_size": 512, 00:05:13.669 "num_blocks": 16384, 00:05:13.669 "uuid": "5b1d47fa-62d3-442b-8fb1-328cdbaffe19", 00:05:13.669 "assigned_rate_limits": { 00:05:13.669 "rw_ios_per_sec": 0, 00:05:13.669 "rw_mbytes_per_sec": 0, 00:05:13.669 "r_mbytes_per_sec": 0, 00:05:13.669 "w_mbytes_per_sec": 0 00:05:13.669 }, 00:05:13.669 "claimed": true, 00:05:13.669 "claim_type": "exclusive_write", 00:05:13.669 "zoned": false, 00:05:13.669 "supported_io_types": { 00:05:13.669 "read": true, 00:05:13.669 "write": true, 00:05:13.669 "unmap": true, 00:05:13.669 "flush": true, 00:05:13.669 "reset": true, 00:05:13.669 "nvme_admin": false, 00:05:13.669 "nvme_io": false, 00:05:13.669 "nvme_io_md": false, 00:05:13.669 "write_zeroes": true, 00:05:13.669 "zcopy": true, 00:05:13.669 "get_zone_info": false, 00:05:13.669 "zone_management": false, 00:05:13.669 "zone_append": false, 00:05:13.669 "compare": false, 00:05:13.669 "compare_and_write": false, 00:05:13.669 "abort": true, 00:05:13.669 "seek_hole": false, 00:05:13.669 "seek_data": false, 00:05:13.669 "copy": true, 00:05:13.669 "nvme_iov_md": false 00:05:13.669 }, 00:05:13.669 "memory_domains": [ 00:05:13.669 { 00:05:13.669 "dma_device_id": "system", 00:05:13.669 "dma_device_type": 1 00:05:13.669 }, 00:05:13.669 { 00:05:13.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.669 "dma_device_type": 2 00:05:13.669 } 00:05:13.669 ], 00:05:13.669 "driver_specific": {} 00:05:13.669 }, 00:05:13.669 { 00:05:13.669 "name": "Passthru0", 00:05:13.669 "aliases": [ 00:05:13.669 "2a5df195-de04-534f-86d1-66213c787248" 00:05:13.669 ], 00:05:13.669 "product_name": "passthru", 00:05:13.669 "block_size": 512, 00:05:13.669 "num_blocks": 16384, 00:05:13.669 "uuid": "2a5df195-de04-534f-86d1-66213c787248", 00:05:13.669 "assigned_rate_limits": { 00:05:13.669 "rw_ios_per_sec": 0, 00:05:13.669 "rw_mbytes_per_sec": 0, 00:05:13.669 "r_mbytes_per_sec": 0, 00:05:13.669 "w_mbytes_per_sec": 0 00:05:13.669 }, 00:05:13.669 "claimed": false, 00:05:13.669 "zoned": false, 00:05:13.669 "supported_io_types": { 00:05:13.669 "read": true, 00:05:13.669 "write": true, 00:05:13.669 "unmap": true, 00:05:13.669 "flush": true, 00:05:13.669 "reset": true, 00:05:13.669 "nvme_admin": false, 00:05:13.669 "nvme_io": false, 00:05:13.669 "nvme_io_md": false, 00:05:13.669 
"write_zeroes": true, 00:05:13.669 "zcopy": true, 00:05:13.669 "get_zone_info": false, 00:05:13.669 "zone_management": false, 00:05:13.669 "zone_append": false, 00:05:13.669 "compare": false, 00:05:13.669 "compare_and_write": false, 00:05:13.669 "abort": true, 00:05:13.669 "seek_hole": false, 00:05:13.669 "seek_data": false, 00:05:13.669 "copy": true, 00:05:13.669 "nvme_iov_md": false 00:05:13.669 }, 00:05:13.669 "memory_domains": [ 00:05:13.669 { 00:05:13.669 "dma_device_id": "system", 00:05:13.669 "dma_device_type": 1 00:05:13.669 }, 00:05:13.669 { 00:05:13.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.669 "dma_device_type": 2 00:05:13.669 } 00:05:13.669 ], 00:05:13.669 "driver_specific": { 00:05:13.669 "passthru": { 00:05:13.669 "name": "Passthru0", 00:05:13.669 "base_bdev_name": "Malloc2" 00:05:13.669 } 00:05:13.669 } 00:05:13.669 } 00:05:13.669 ]' 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.669 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.930 17:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.930 00:05:13.930 real 0m0.287s 00:05:13.930 user 0m0.176s 00:05:13.930 sys 0m0.044s 00:05:13.930 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.930 17:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.930 ************************************ 00:05:13.930 END TEST rpc_daemon_integrity 00:05:13.930 ************************************ 00:05:13.930 17:05:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.930 17:05:12 rpc -- rpc/rpc.sh@84 -- # killprocess 2774130 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@950 -- # '[' -z 2774130 ']' 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@954 -- # kill -0 2774130 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2774130 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.930 17:05:12 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2774130' 00:05:13.930 killing process with pid 2774130 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@969 -- # kill 2774130 00:05:13.930 17:05:12 rpc -- common/autotest_common.sh@974 -- # wait 2774130 00:05:14.191 00:05:14.191 real 0m1.945s 00:05:14.191 user 0m2.492s 00:05:14.191 sys 0m0.676s 00:05:14.191 17:05:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.191 17:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.191 ************************************ 00:05:14.191 END TEST rpc 00:05:14.191 ************************************ 00:05:14.191 17:05:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.191 17:05:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.191 17:05:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.191 17:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:14.191 ************************************ 00:05:14.191 START TEST skip_rpc 00:05:14.191 ************************************ 00:05:14.191 17:05:12 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.452 * Looking for test storage... 00:05:14.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.452 17:05:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.452 --rc genhtml_branch_coverage=1 00:05:14.452 --rc genhtml_function_coverage=1 00:05:14.452 --rc genhtml_legend=1 00:05:14.452 --rc geninfo_all_blocks=1 00:05:14.452 --rc geninfo_unexecuted_blocks=1 00:05:14.452 00:05:14.452 ' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.452 --rc genhtml_branch_coverage=1 00:05:14.452 --rc genhtml_function_coverage=1 00:05:14.452 --rc genhtml_legend=1 00:05:14.452 --rc geninfo_all_blocks=1 00:05:14.452 --rc geninfo_unexecuted_blocks=1 00:05:14.452 00:05:14.452 ' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.452 --rc genhtml_branch_coverage=1 00:05:14.452 --rc genhtml_function_coverage=1 00:05:14.452 --rc genhtml_legend=1 00:05:14.452 --rc geninfo_all_blocks=1 00:05:14.452 --rc geninfo_unexecuted_blocks=1 00:05:14.452 00:05:14.452 ' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.452 --rc genhtml_branch_coverage=1 00:05:14.452 --rc genhtml_function_coverage=1 00:05:14.452 --rc genhtml_legend=1 00:05:14.452 --rc geninfo_all_blocks=1 00:05:14.452 --rc geninfo_unexecuted_blocks=1 00:05:14.452 00:05:14.452 ' 00:05:14.452 17:05:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.452 17:05:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.452 17:05:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.452 17:05:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.452 ************************************ 00:05:14.452 START TEST skip_rpc 00:05:14.452 ************************************ 00:05:14.452 17:05:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:14.452 
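The skip_rpc case that starts here launches the target with --no-rpc-server, so the spdk_get_version call a few lines below is expected to fail; the NOT wrapper turning that failure (es=1) into a pass is the whole assertion. Stripped of the xtrace plumbing, the check amounts to something like:

    # start the target on core 0 with the RPC server disabled, as skip_rpc.sh does
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    # with no listener on /var/tmp/spdk.sock this must fail; a zero exit code would fail the test
    if ./scripts/rpc.py spdk_get_version; then echo "unexpected: RPC server answered"; fi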
17:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2774654 00:05:14.452 17:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.452 17:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.452 17:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.452 [2024-10-01 17:05:12.944710] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:14.452 [2024-10-01 17:05:12.944756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774654 ] 00:05:14.712 [2024-10-01 17:05:13.005689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.712 [2024-10-01 17:05:13.036477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2774654 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2774654 ']' 00:05:20.081 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2774654 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2774654 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2774654' 00:05:20.082 killing process with pid 2774654 00:05:20.082 17:05:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2774654 00:05:20.082 17:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2774654 00:05:20.082 00:05:20.082 real 0m5.287s 00:05:20.082 user 0m5.100s 00:05:20.082 sys 0m0.235s 00:05:20.082 17:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.082 ************************************ 00:05:20.082 END TEST skip_rpc 00:05:20.082 ************************************ 00:05:20.082 17:05:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.082 17:05:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.082 17:05:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.082 ************************************ 00:05:20.082 START TEST skip_rpc_with_json 00:05:20.082 ************************************ 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2775697 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2775697 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2775697 ']' 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.082 [2024-10-01 17:05:18.310589] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
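skip_rpc_with_json, whose startup banner appears above, drives the save/restore path: nvmf_get_transports is queried before any transport exists (the "No such device" JSON-RPC error recorded below), a TCP transport is created, the whole runtime configuration is saved to test/rpc/config.json, and a second target is then booted from that file so the "TCP Transport Init" notice must reappear without any further RPCs. A condensed sketch of the same flow, under the same assumptions about socket and paths as above:

    # before a transport exists this returns the JSON-RPC "No such device" error
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp
    # create the TCP transport, then snapshot the full runtime configuration
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > config.json
    # a fresh target started from that file re-creates the transport with no further RPCs
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json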
00:05:20.082 [2024-10-01 17:05:18.310638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775697 ] 00:05:20.082 [2024-10-01 17:05:18.371364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.082 [2024-10-01 17:05:18.400835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.082 [2024-10-01 17:05:18.573847] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.082 request: 00:05:20.082 { 00:05:20.082 "trtype": "tcp", 00:05:20.082 "method": "nvmf_get_transports", 00:05:20.082 "req_id": 1 00:05:20.082 } 00:05:20.082 Got JSON-RPC error response 00:05:20.082 response: 00:05:20.082 { 00:05:20.082 "code": -19, 00:05:20.082 "message": "No such device" 00:05:20.082 } 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.082 [2024-10-01 17:05:18.585974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.082 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.342 { 00:05:20.342 "subsystems": [ 00:05:20.342 { 00:05:20.342 "subsystem": "fsdev", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "fsdev_set_opts", 00:05:20.342 "params": { 00:05:20.342 "fsdev_io_pool_size": 65535, 00:05:20.342 "fsdev_io_cache_size": 256 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "vfio_user_target", 00:05:20.342 "config": null 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "keyring", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "iobuf", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "iobuf_set_options", 00:05:20.342 "params": { 00:05:20.342 "small_pool_count": 8192, 00:05:20.342 "large_pool_count": 1024, 00:05:20.342 "small_bufsize": 8192, 00:05:20.342 "large_bufsize": 135168 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 
00:05:20.342 "subsystem": "sock", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "sock_set_default_impl", 00:05:20.342 "params": { 00:05:20.342 "impl_name": "posix" 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "sock_impl_set_options", 00:05:20.342 "params": { 00:05:20.342 "impl_name": "ssl", 00:05:20.342 "recv_buf_size": 4096, 00:05:20.342 "send_buf_size": 4096, 00:05:20.342 "enable_recv_pipe": true, 00:05:20.342 "enable_quickack": false, 00:05:20.342 "enable_placement_id": 0, 00:05:20.342 "enable_zerocopy_send_server": true, 00:05:20.342 "enable_zerocopy_send_client": false, 00:05:20.342 "zerocopy_threshold": 0, 00:05:20.342 "tls_version": 0, 00:05:20.342 "enable_ktls": false 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "sock_impl_set_options", 00:05:20.342 "params": { 00:05:20.342 "impl_name": "posix", 00:05:20.342 "recv_buf_size": 2097152, 00:05:20.342 "send_buf_size": 2097152, 00:05:20.342 "enable_recv_pipe": true, 00:05:20.342 "enable_quickack": false, 00:05:20.342 "enable_placement_id": 0, 00:05:20.342 "enable_zerocopy_send_server": true, 00:05:20.342 "enable_zerocopy_send_client": false, 00:05:20.342 "zerocopy_threshold": 0, 00:05:20.342 "tls_version": 0, 00:05:20.342 "enable_ktls": false 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "vmd", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "accel", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "accel_set_options", 00:05:20.342 "params": { 00:05:20.342 "small_cache_size": 128, 00:05:20.342 "large_cache_size": 16, 00:05:20.342 "task_count": 2048, 00:05:20.342 "sequence_count": 2048, 00:05:20.342 "buf_count": 2048 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "bdev", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "bdev_set_options", 00:05:20.342 "params": { 00:05:20.342 "bdev_io_pool_size": 65535, 00:05:20.342 "bdev_io_cache_size": 256, 00:05:20.342 "bdev_auto_examine": true, 00:05:20.342 "iobuf_small_cache_size": 128, 00:05:20.342 "iobuf_large_cache_size": 16 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "bdev_raid_set_options", 00:05:20.342 "params": { 00:05:20.342 "process_window_size_kb": 1024, 00:05:20.342 "process_max_bandwidth_mb_sec": 0 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "bdev_iscsi_set_options", 00:05:20.342 "params": { 00:05:20.342 "timeout_sec": 30 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "bdev_nvme_set_options", 00:05:20.342 "params": { 00:05:20.342 "action_on_timeout": "none", 00:05:20.342 "timeout_us": 0, 00:05:20.342 "timeout_admin_us": 0, 00:05:20.342 "keep_alive_timeout_ms": 10000, 00:05:20.342 "arbitration_burst": 0, 00:05:20.342 "low_priority_weight": 0, 00:05:20.342 "medium_priority_weight": 0, 00:05:20.342 "high_priority_weight": 0, 00:05:20.342 "nvme_adminq_poll_period_us": 10000, 00:05:20.342 "nvme_ioq_poll_period_us": 0, 00:05:20.342 "io_queue_requests": 0, 00:05:20.342 "delay_cmd_submit": true, 00:05:20.342 "transport_retry_count": 4, 00:05:20.342 "bdev_retry_count": 3, 00:05:20.342 "transport_ack_timeout": 0, 00:05:20.342 "ctrlr_loss_timeout_sec": 0, 00:05:20.342 "reconnect_delay_sec": 0, 00:05:20.342 "fast_io_fail_timeout_sec": 0, 00:05:20.342 "disable_auto_failback": false, 00:05:20.342 "generate_uuids": false, 00:05:20.342 "transport_tos": 0, 00:05:20.342 "nvme_error_stat": false, 
00:05:20.342 "rdma_srq_size": 0, 00:05:20.342 "io_path_stat": false, 00:05:20.342 "allow_accel_sequence": false, 00:05:20.342 "rdma_max_cq_size": 0, 00:05:20.342 "rdma_cm_event_timeout_ms": 0, 00:05:20.342 "dhchap_digests": [ 00:05:20.342 "sha256", 00:05:20.342 "sha384", 00:05:20.342 "sha512" 00:05:20.342 ], 00:05:20.342 "dhchap_dhgroups": [ 00:05:20.342 "null", 00:05:20.342 "ffdhe2048", 00:05:20.342 "ffdhe3072", 00:05:20.342 "ffdhe4096", 00:05:20.342 "ffdhe6144", 00:05:20.342 "ffdhe8192" 00:05:20.342 ] 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "bdev_nvme_set_hotplug", 00:05:20.342 "params": { 00:05:20.342 "period_us": 100000, 00:05:20.342 "enable": false 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "bdev_wait_for_examine" 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "scsi", 00:05:20.342 "config": null 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "scheduler", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "framework_set_scheduler", 00:05:20.342 "params": { 00:05:20.342 "name": "static" 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "vhost_scsi", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "vhost_blk", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "ublk", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "nbd", 00:05:20.342 "config": [] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "nvmf", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "nvmf_set_config", 00:05:20.342 "params": { 00:05:20.342 "discovery_filter": "match_any", 00:05:20.342 "admin_cmd_passthru": { 00:05:20.342 "identify_ctrlr": false 00:05:20.342 }, 00:05:20.342 "dhchap_digests": [ 00:05:20.342 "sha256", 00:05:20.342 "sha384", 00:05:20.342 "sha512" 00:05:20.342 ], 00:05:20.342 "dhchap_dhgroups": [ 00:05:20.342 "null", 00:05:20.342 "ffdhe2048", 00:05:20.342 "ffdhe3072", 00:05:20.342 "ffdhe4096", 00:05:20.342 "ffdhe6144", 00:05:20.342 "ffdhe8192" 00:05:20.342 ] 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "nvmf_set_max_subsystems", 00:05:20.342 "params": { 00:05:20.342 "max_subsystems": 1024 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "nvmf_set_crdt", 00:05:20.342 "params": { 00:05:20.342 "crdt1": 0, 00:05:20.342 "crdt2": 0, 00:05:20.342 "crdt3": 0 00:05:20.342 } 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "method": "nvmf_create_transport", 00:05:20.342 "params": { 00:05:20.342 "trtype": "TCP", 00:05:20.342 "max_queue_depth": 128, 00:05:20.342 "max_io_qpairs_per_ctrlr": 127, 00:05:20.342 "in_capsule_data_size": 4096, 00:05:20.342 "max_io_size": 131072, 00:05:20.342 "io_unit_size": 131072, 00:05:20.342 "max_aq_depth": 128, 00:05:20.342 "num_shared_buffers": 511, 00:05:20.342 "buf_cache_size": 4294967295, 00:05:20.342 "dif_insert_or_strip": false, 00:05:20.342 "zcopy": false, 00:05:20.342 "c2h_success": true, 00:05:20.342 "sock_priority": 0, 00:05:20.342 "abort_timeout_sec": 1, 00:05:20.342 "ack_timeout": 0, 00:05:20.342 "data_wr_pool_size": 0 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 }, 00:05:20.342 { 00:05:20.342 "subsystem": "iscsi", 00:05:20.342 "config": [ 00:05:20.342 { 00:05:20.342 "method": "iscsi_set_options", 00:05:20.342 "params": { 00:05:20.342 "node_base": "iqn.2016-06.io.spdk", 00:05:20.342 "max_sessions": 128, 00:05:20.342 
"max_connections_per_session": 2, 00:05:20.342 "max_queue_depth": 64, 00:05:20.342 "default_time2wait": 2, 00:05:20.342 "default_time2retain": 20, 00:05:20.342 "first_burst_length": 8192, 00:05:20.342 "immediate_data": true, 00:05:20.342 "allow_duplicated_isid": false, 00:05:20.342 "error_recovery_level": 0, 00:05:20.342 "nop_timeout": 60, 00:05:20.342 "nop_in_interval": 30, 00:05:20.342 "disable_chap": false, 00:05:20.342 "require_chap": false, 00:05:20.342 "mutual_chap": false, 00:05:20.342 "chap_group": 0, 00:05:20.342 "max_large_datain_per_connection": 64, 00:05:20.342 "max_r2t_per_connection": 4, 00:05:20.342 "pdu_pool_size": 36864, 00:05:20.342 "immediate_data_pool_size": 16384, 00:05:20.342 "data_out_pool_size": 2048 00:05:20.342 } 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 } 00:05:20.342 ] 00:05:20.342 } 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2775697 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2775697 ']' 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2775697 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2775697 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2775697' 00:05:20.342 killing process with pid 2775697 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2775697 00:05:20.342 17:05:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2775697 00:05:20.604 17:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2775945 00:05:20.604 17:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.604 17:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2775945 ']' 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2775945' 00:05:25.887 killing process with pid 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2775945 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.887 00:05:25.887 real 0m6.072s 00:05:25.887 user 0m5.856s 00:05:25.887 sys 0m0.536s 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.887 ************************************ 00:05:25.887 END TEST skip_rpc_with_json 00:05:25.887 ************************************ 00:05:25.887 17:05:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:25.887 17:05:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.887 17:05:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.887 17:05:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.887 ************************************ 00:05:25.887 START TEST skip_rpc_with_delay 00:05:25.887 ************************************ 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.887 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.148 [2024-10-01 
17:05:24.466346] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:26.148 [2024-10-01 17:05:24.466438] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.148 00:05:26.148 real 0m0.078s 00:05:26.148 user 0m0.046s 00:05:26.148 sys 0m0.032s 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.148 17:05:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.148 ************************************ 00:05:26.148 END TEST skip_rpc_with_delay 00:05:26.148 ************************************ 00:05:26.148 17:05:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.148 17:05:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.148 17:05:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.148 17:05:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.148 17:05:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.148 17:05:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.148 ************************************ 00:05:26.148 START TEST exit_on_failed_rpc_init 00:05:26.148 ************************************ 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2777099 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2777099 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2777099 ']' 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.148 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.148 [2024-10-01 17:05:24.624031] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:05:26.148 [2024-10-01 17:05:24.624091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777099 ] 00:05:26.148 [2024-10-01 17:05:24.688142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.408 [2024-10-01 17:05:24.726949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.408 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.409 17:05:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.669 [2024-10-01 17:05:24.955807] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:26.669 [2024-10-01 17:05:24.955859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777104 ] 00:05:26.669 [2024-10-01 17:05:25.033424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.669 [2024-10-01 17:05:25.064153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.669 [2024-10-01 17:05:25.064209] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:26.669 [2024-10-01 17:05:25.064219] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:26.669 [2024-10-01 17:05:25.064226] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2777099 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2777099 ']' 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2777099 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2777099 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2777099' 00:05:26.669 killing process with pid 2777099 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2777099 00:05:26.669 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2777099 00:05:26.929 00:05:26.929 real 0m0.821s 00:05:26.929 user 0m0.906s 00:05:26.929 sys 0m0.373s 00:05:26.929 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.929 17:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.929 ************************************ 00:05:26.929 END TEST exit_on_failed_rpc_init 00:05:26.929 ************************************ 00:05:26.929 17:05:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.929 00:05:26.929 real 0m12.778s 00:05:26.929 user 0m12.134s 00:05:26.929 sys 0m1.497s 00:05:26.929 17:05:25 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.929 17:05:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.929 ************************************ 00:05:26.929 END TEST skip_rpc 00:05:26.929 ************************************ 00:05:26.929 17:05:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.929 17:05:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.929 17:05:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.929 17:05:25 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.191 ************************************ 00:05:27.191 START TEST rpc_client 00:05:27.191 ************************************ 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.191 * Looking for test storage... 00:05:27.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.191 17:05:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.191 --rc genhtml_branch_coverage=1 00:05:27.191 --rc genhtml_function_coverage=1 00:05:27.191 --rc genhtml_legend=1 00:05:27.191 --rc geninfo_all_blocks=1 00:05:27.191 --rc geninfo_unexecuted_blocks=1 00:05:27.191 00:05:27.191 ' 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.191 --rc genhtml_branch_coverage=1 00:05:27.191 --rc genhtml_function_coverage=1 00:05:27.191 --rc genhtml_legend=1 00:05:27.191 --rc geninfo_all_blocks=1 00:05:27.191 --rc geninfo_unexecuted_blocks=1 00:05:27.191 00:05:27.191 ' 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.191 --rc genhtml_branch_coverage=1 00:05:27.191 --rc genhtml_function_coverage=1 00:05:27.191 --rc genhtml_legend=1 00:05:27.191 --rc geninfo_all_blocks=1 00:05:27.191 --rc geninfo_unexecuted_blocks=1 00:05:27.191 00:05:27.191 ' 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.191 --rc genhtml_branch_coverage=1 00:05:27.191 --rc genhtml_function_coverage=1 00:05:27.191 --rc genhtml_legend=1 00:05:27.191 --rc geninfo_all_blocks=1 00:05:27.191 --rc geninfo_unexecuted_blocks=1 00:05:27.191 00:05:27.191 ' 00:05:27.191 17:05:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.191 OK 00:05:27.191 17:05:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.191 00:05:27.191 real 0m0.230s 00:05:27.191 user 0m0.131s 00:05:27.191 sys 0m0.113s 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.191 17:05:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.191 ************************************ 00:05:27.191 END TEST rpc_client 00:05:27.191 ************************************ 00:05:27.452 17:05:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
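The rpc_client run above (and the json_config run that follows) step through the same lcov version gate in scripts/common.sh: each version string is split on '.', '-' and ':' and compared component by component to decide whether the installed lcov is older than 2. A stripped-down sketch of that comparison idea — illustrative only, with a hypothetical helper name rather than the real cmp_versions, and assuming purely numeric components:

version_lt() {                                  # "is $1 older than $2?"
    local IFS=.-:                               # same separators the trace shows
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}         # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                    # equal is not "less than"
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "old lcov: pass the extra --rc coverage options"
fi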
00:05:27.452 17:05:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.452 17:05:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.452 17:05:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.452 ************************************ 00:05:27.452 START TEST json_config 00:05:27.452 ************************************ 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.452 17:05:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.452 17:05:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.452 17:05:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.452 17:05:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.452 17:05:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.452 17:05:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:27.452 17:05:25 json_config -- scripts/common.sh@345 -- # : 1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.452 17:05:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.452 17:05:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@353 -- # local d=1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.452 17:05:25 json_config -- scripts/common.sh@355 -- # echo 1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.452 17:05:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@353 -- # local d=2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.452 17:05:25 json_config -- scripts/common.sh@355 -- # echo 2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.452 17:05:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.452 17:05:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.452 17:05:25 json_config -- scripts/common.sh@368 -- # return 0 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.452 --rc genhtml_branch_coverage=1 00:05:27.452 --rc genhtml_function_coverage=1 00:05:27.452 --rc genhtml_legend=1 00:05:27.452 --rc geninfo_all_blocks=1 00:05:27.452 --rc geninfo_unexecuted_blocks=1 00:05:27.452 00:05:27.452 ' 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.452 --rc genhtml_branch_coverage=1 00:05:27.452 --rc genhtml_function_coverage=1 00:05:27.452 --rc genhtml_legend=1 00:05:27.452 --rc geninfo_all_blocks=1 00:05:27.452 --rc geninfo_unexecuted_blocks=1 00:05:27.452 00:05:27.452 ' 00:05:27.452 17:05:25 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.452 --rc genhtml_branch_coverage=1 00:05:27.452 --rc genhtml_function_coverage=1 00:05:27.452 --rc genhtml_legend=1 00:05:27.452 --rc geninfo_all_blocks=1 00:05:27.452 --rc geninfo_unexecuted_blocks=1 00:05:27.452 00:05:27.452 ' 00:05:27.453 17:05:25 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.453 --rc genhtml_branch_coverage=1 00:05:27.453 --rc genhtml_function_coverage=1 00:05:27.453 --rc genhtml_legend=1 00:05:27.453 --rc geninfo_all_blocks=1 00:05:27.453 --rc geninfo_unexecuted_blocks=1 00:05:27.453 00:05:27.453 ' 00:05:27.453 17:05:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:27.453 17:05:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.453 17:05:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.713 17:05:25 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.713 17:05:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.713 17:05:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.713 17:05:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.713 17:05:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.713 17:05:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.714 17:05:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.714 17:05:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.714 17:05:26 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.714 17:05:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@51 -- # : 0 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:27.714 17:05:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.714 17:05:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:27.714 INFO: JSON configuration test init 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.714 17:05:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.714 17:05:26 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:27.714 17:05:26 json_config -- json_config/common.sh@10 -- # shift 00:05:27.714 17:05:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.714 17:05:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.714 17:05:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.714 17:05:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.714 17:05:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.714 17:05:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2777560 00:05:27.714 17:05:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.714 Waiting for target to run... 00:05:27.714 17:05:26 json_config -- json_config/common.sh@25 -- # waitforlisten 2777560 /var/tmp/spdk_tgt.sock 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 2777560 ']' 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.714 17:05:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.714 17:05:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.714 [2024-10-01 17:05:26.085200] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:05:27.714 [2024-10-01 17:05:26.085256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777560 ] 00:05:27.974 [2024-10-01 17:05:26.354763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.974 [2024-10-01 17:05:26.373646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:28.545 17:05:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.545 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.545 17:05:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.545 17:05:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:28.545 17:05:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:29.117 17:05:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.117 17:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:29.117 17:05:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:29.117 17:05:27 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@54 -- # sort 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:29.117 17:05:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:29.377 17:05:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.377 17:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:29.377 17:05:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.377 17:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.377 17:05:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.377 MallocForNvmf0 00:05:29.377 17:05:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.377 17:05:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.637 MallocForNvmf1 00:05:29.637 17:05:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.637 17:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.897 [2024-10-01 17:05:28.226150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.897 17:05:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.897 17:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.897 17:05:28 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.897 17:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.157 17:05:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.157 17:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.418 17:05:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.418 17:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.418 [2024-10-01 17:05:28.944436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.684 17:05:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:30.684 17:05:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.684 17:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.684 17:05:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:30.684 17:05:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.684 17:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.684 17:05:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:30.684 17:05:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.684 17:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.684 MallocBdevForConfigChangeCheck 00:05:30.684 17:05:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:30.684 17:05:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.684 17:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.943 17:05:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:30.943 17:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.202 17:05:29 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:31.202 INFO: shutting down applications... 
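Condensed from the tgt_rpc calls traced above, this is the target-side setup as a plain script: two malloc bdevs, a TCP transport, one NVMe-oF subsystem carrying both namespaces and a listener on 127.0.0.1:4420, then a config snapshot. Only calls the trace actually issues are used; the snapshot path at the end is illustrative (the harness keeps its copy at spdk/spdk_tgt_config.json in the workspace).

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > /tmp/spdk_tgt_config.json    # illustrative destination for the snapshot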
00:05:31.202 17:05:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:31.202 17:05:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:31.202 17:05:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:31.202 17:05:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:31.462 Calling clear_iscsi_subsystem 00:05:31.462 Calling clear_nvmf_subsystem 00:05:31.462 Calling clear_nbd_subsystem 00:05:31.462 Calling clear_ublk_subsystem 00:05:31.462 Calling clear_vhost_blk_subsystem 00:05:31.462 Calling clear_vhost_scsi_subsystem 00:05:31.462 Calling clear_bdev_subsystem 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:31.722 17:05:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.982 17:05:30 json_config -- json_config/json_config.sh@352 -- # break 00:05:31.982 17:05:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:31.982 17:05:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:31.982 17:05:30 json_config -- json_config/common.sh@31 -- # local app=target 00:05:31.982 17:05:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.982 17:05:30 json_config -- json_config/common.sh@35 -- # [[ -n 2777560 ]] 00:05:31.982 17:05:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2777560 00:05:31.982 17:05:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.982 17:05:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.982 17:05:30 json_config -- json_config/common.sh@41 -- # kill -0 2777560 00:05:31.982 17:05:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.556 17:05:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.556 17:05:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.556 17:05:30 json_config -- json_config/common.sh@41 -- # kill -0 2777560 00:05:32.556 17:05:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.556 17:05:30 json_config -- json_config/common.sh@43 -- # break 00:05:32.556 17:05:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.556 17:05:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.556 SPDK target shutdown done 00:05:32.556 17:05:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:32.556 INFO: relaunching applications... 
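The shutdown just traced sends SIGINT to the target and then polls the pid for up to 30 half-second intervals (roughly a 15s budget) before declaring 'SPDK target shutdown done'. The same pattern reduced to a standalone helper; the message in the failure branch is invented for the sketch:

shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do            # 30 x 0.5s, as in the trace
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid still alive after SIGINT" >&2
    return 1
}

shutdown_app 2777560        # pid taken from this run's trace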
00:05:32.556 17:05:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.556 17:05:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.556 17:05:30 json_config -- json_config/common.sh@10 -- # shift 00:05:32.556 17:05:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.556 17:05:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.556 17:05:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.556 17:05:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.556 17:05:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.556 17:05:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2778679 00:05:32.556 17:05:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.556 Waiting for target to run... 00:05:32.556 17:05:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2778679 /var/tmp/spdk_tgt.sock 00:05:32.556 17:05:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 2778679 ']' 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.556 17:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.556 [2024-10-01 17:05:30.954193] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:32.556 [2024-10-01 17:05:30.954270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778679 ] 00:05:32.816 [2024-10-01 17:05:31.257051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.816 [2024-10-01 17:05:31.278598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.387 [2024-10-01 17:05:31.759975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.387 [2024-10-01 17:05:31.792360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.387 17:05:31 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.387 17:05:31 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:33.387 17:05:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.387 00:05:33.387 17:05:31 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:33.387 17:05:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.387 INFO: Checking if target configuration is the same... 
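The identity check announced here (and traced below) re-saves the running configuration over the RPC socket, passes both that dump and the JSON file the target was relaunched from through test/json_config/config_filter.py -method sort, and diffs the results — presumably sorting so that field ordering alone cannot produce a mismatch. Its shape, with illustrative temp-file names and the stdin/stdout plumbing assumed:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER="$SPDK/test/json_config/config_filter.py"

$RPC save_config              | $FILTER -method sort > /tmp/live_config.json
$FILTER -method sort < "$SPDK/spdk_tgt_config.json"  > /tmp/saved_config.json

if diff -u /tmp/saved_config.json /tmp/live_config.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi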
00:05:33.387 17:05:31 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.387 17:05:31 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:33.387 17:05:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.387 + '[' 2 -ne 2 ']' 00:05:33.387 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.387 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:33.387 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.387 +++ basename /dev/fd/62 00:05:33.387 ++ mktemp /tmp/62.XXX 00:05:33.387 + tmp_file_1=/tmp/62.rbR 00:05:33.387 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.387 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.387 + tmp_file_2=/tmp/spdk_tgt_config.json.YOK 00:05:33.387 + ret=0 00:05:33.387 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.647 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.907 + diff -u /tmp/62.rbR /tmp/spdk_tgt_config.json.YOK 00:05:33.907 + echo 'INFO: JSON config files are the same' 00:05:33.907 INFO: JSON config files are the same 00:05:33.907 + rm /tmp/62.rbR /tmp/spdk_tgt_config.json.YOK 00:05:33.907 + exit 0 00:05:33.907 17:05:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:33.907 17:05:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.907 INFO: changing configuration and checking if this can be detected... 00:05:33.907 17:05:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.907 17:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.907 17:05:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.907 17:05:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:33.907 17:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.907 + '[' 2 -ne 2 ']' 00:05:33.907 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.907 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:33.907 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.907 +++ basename /dev/fd/62 00:05:33.908 ++ mktemp /tmp/62.XXX 00:05:33.908 + tmp_file_1=/tmp/62.dSJ 00:05:33.908 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.908 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.908 + tmp_file_2=/tmp/spdk_tgt_config.json.tvG 00:05:33.908 + ret=0 00:05:33.908 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.429 + diff -u /tmp/62.dSJ /tmp/spdk_tgt_config.json.tvG 00:05:34.429 + ret=1 00:05:34.429 + echo '=== Start of file: /tmp/62.dSJ ===' 00:05:34.429 + cat /tmp/62.dSJ 00:05:34.429 + echo '=== End of file: /tmp/62.dSJ ===' 00:05:34.429 + echo '' 00:05:34.429 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tvG ===' 00:05:34.429 + cat /tmp/spdk_tgt_config.json.tvG 00:05:34.429 + echo '=== End of file: /tmp/spdk_tgt_config.json.tvG ===' 00:05:34.429 + echo '' 00:05:34.429 + rm /tmp/62.dSJ /tmp/spdk_tgt_config.json.tvG 00:05:34.429 + exit 1 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:34.429 INFO: configuration change detected. 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@324 -- # [[ -n 2778679 ]] 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.429 17:05:32 json_config -- json_config/json_config.sh@330 -- # killprocess 2778679 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@950 -- # '[' -z 2778679 ']' 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@954 -- # kill -0 2778679 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@955 -- # uname 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.429 17:05:32 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2778679 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2778679' 00:05:34.429 killing process with pid 2778679 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@969 -- # kill 2778679 00:05:34.429 17:05:32 json_config -- common/autotest_common.sh@974 -- # wait 2778679 00:05:34.689 17:05:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.689 17:05:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:34.689 17:05:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.689 17:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.689 17:05:33 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:34.689 17:05:33 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:34.689 INFO: Success 00:05:34.689 00:05:34.689 real 0m7.409s 00:05:34.689 user 0m9.226s 00:05:34.689 sys 0m1.750s 00:05:34.689 17:05:33 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.689 17:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.689 ************************************ 00:05:34.689 END TEST json_config 00:05:34.689 ************************************ 00:05:34.950 17:05:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.950 17:05:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.950 17:05:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.950 17:05:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.950 ************************************ 00:05:34.950 START TEST json_config_extra_key 00:05:34.950 ************************************ 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.950 17:05:33 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.950 17:05:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.950 --rc genhtml_branch_coverage=1 00:05:34.950 --rc genhtml_function_coverage=1 00:05:34.950 --rc genhtml_legend=1 00:05:34.950 --rc geninfo_all_blocks=1 00:05:34.950 --rc geninfo_unexecuted_blocks=1 00:05:34.950 00:05:34.950 ' 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.950 --rc genhtml_branch_coverage=1 00:05:34.950 --rc genhtml_function_coverage=1 00:05:34.950 --rc genhtml_legend=1 00:05:34.950 --rc geninfo_all_blocks=1 00:05:34.950 --rc geninfo_unexecuted_blocks=1 00:05:34.950 00:05:34.950 ' 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.950 --rc genhtml_branch_coverage=1 00:05:34.950 --rc genhtml_function_coverage=1 00:05:34.950 --rc genhtml_legend=1 00:05:34.950 --rc geninfo_all_blocks=1 00:05:34.950 --rc geninfo_unexecuted_blocks=1 00:05:34.950 00:05:34.950 ' 00:05:34.950 17:05:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.950 --rc genhtml_branch_coverage=1 00:05:34.950 --rc genhtml_function_coverage=1 00:05:34.950 --rc genhtml_legend=1 00:05:34.950 --rc geninfo_all_blocks=1 00:05:34.950 --rc geninfo_unexecuted_blocks=1 00:05:34.950 00:05:34.950 ' 00:05:34.951 17:05:33 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.951 17:05:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.951 17:05:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.951 17:05:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.951 17:05:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.951 17:05:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.951 17:05:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.951 17:05:33 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.951 17:05:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.951 17:05:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.951 17:05:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.212 INFO: launching applications... 
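As the declarations above show, json_config_extra_key tracks each application in bash associative arrays keyed by app name (app_pid, app_socket, app_params, configs_path). A compact sketch of that bookkeeping pattern, using only the 'target' entry seen in the trace:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    # Word-splitting of app_params is intentional here: it holds CLI flags.
    ./build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "launched $app as pid ${app_pid[$app]}"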
00:05:35.212 17:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2779170 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.212 Waiting for target to run... 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2779170 /var/tmp/spdk_tgt.sock 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2779170 ']' 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.212 17:05:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.212 17:05:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.212 [2024-10-01 17:05:33.564191] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:35.212 [2024-10-01 17:05:33.564262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779170 ] 00:05:35.473 [2024-10-01 17:05:33.825514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.473 [2024-10-01 17:05:33.843981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.044 17:05:34 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.044 17:05:34 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.044 00:05:36.044 17:05:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:36.044 INFO: shutting down applications... 
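Shutdown, traced just below, is a SIGINT to the recorded pid followed by a bounded kill -0 poll. A hedged sketch matching the (( i < 30 )) / sleep 0.5 structure visible in json_config/common.sh:

    pid=${app_pid[target]}
    kill -SIGINT "$pid"

    # Give the target up to ~15s (30 x 0.5s) to exit on its own.
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done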
00:05:36.044 17:05:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2779170 ]] 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2779170 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2779170 00:05:36.044 17:05:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2779170 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.615 17:05:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.615 SPDK target shutdown done 00:05:36.615 17:05:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:36.615 Success 00:05:36.615 00:05:36.615 real 0m1.564s 00:05:36.615 user 0m1.203s 00:05:36.615 sys 0m0.394s 00:05:36.615 17:05:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.615 17:05:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.615 ************************************ 00:05:36.615 END TEST json_config_extra_key 00:05:36.615 ************************************ 00:05:36.615 17:05:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.615 17:05:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.615 17:05:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.615 17:05:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.615 ************************************ 00:05:36.615 START TEST alias_rpc 00:05:36.615 ************************************ 00:05:36.616 17:05:34 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.616 * Looking for test storage... 
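The 'lt 1.15 2' check that follows (and that already ran for the earlier tests) compares dotted version strings field by field via cmp_versions in scripts/common.sh. A simplified, hedged re-implementation is sketched below; it handles plain dotted versions only, whereas the real helper also splits on '-' and ':'.

    # Return 0 (true) when $1 is strictly older than $2; dotted fields only.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov is older than 2.x'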
00:05:36.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.616 17:05:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.616 --rc genhtml_branch_coverage=1 00:05:36.616 --rc genhtml_function_coverage=1 00:05:36.616 --rc genhtml_legend=1 00:05:36.616 --rc geninfo_all_blocks=1 00:05:36.616 --rc geninfo_unexecuted_blocks=1 00:05:36.616 00:05:36.616 ' 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.616 --rc genhtml_branch_coverage=1 00:05:36.616 --rc genhtml_function_coverage=1 00:05:36.616 --rc genhtml_legend=1 00:05:36.616 --rc geninfo_all_blocks=1 00:05:36.616 --rc geninfo_unexecuted_blocks=1 00:05:36.616 00:05:36.616 ' 00:05:36.616 17:05:35 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.616 --rc genhtml_branch_coverage=1 00:05:36.616 --rc genhtml_function_coverage=1 00:05:36.616 --rc genhtml_legend=1 00:05:36.616 --rc geninfo_all_blocks=1 00:05:36.616 --rc geninfo_unexecuted_blocks=1 00:05:36.616 00:05:36.616 ' 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.616 --rc genhtml_branch_coverage=1 00:05:36.616 --rc genhtml_function_coverage=1 00:05:36.616 --rc genhtml_legend=1 00:05:36.616 --rc geninfo_all_blocks=1 00:05:36.616 --rc geninfo_unexecuted_blocks=1 00:05:36.616 00:05:36.616 ' 00:05:36.616 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.616 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2779562 00:05:36.616 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2779562 00:05:36.616 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2779562 ']' 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.616 17:05:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.876 [2024-10-01 17:05:35.192894] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:05:36.876 [2024-10-01 17:05:35.192945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779562 ] 00:05:36.876 [2024-10-01 17:05:35.254304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.876 [2024-10-01 17:05:35.284920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.137 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:37.137 17:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2779562 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2779562 ']' 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2779562 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.137 17:05:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2779562 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2779562' 00:05:37.398 killing process with pid 2779562 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@969 -- # kill 2779562 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 2779562 00:05:37.398 00:05:37.398 real 0m1.005s 00:05:37.398 user 0m1.047s 00:05:37.398 sys 0m0.377s 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.398 17:05:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.398 ************************************ 00:05:37.398 END TEST alias_rpc 00:05:37.398 ************************************ 00:05:37.659 17:05:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.659 17:05:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.659 17:05:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.659 17:05:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.659 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.659 ************************************ 00:05:37.659 START TEST spdkcli_tcp 00:05:37.659 ************************************ 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.659 * Looking for test storage... 
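killprocess, traced above for the alias_rpc target (pid 2779562), confirms the pid is alive, inspects its command name with ps (refusing to kill a sudo wrapper), then kills it and waits for it to exit. A hedged sketch of that guard-then-kill pattern; it assumes the pid is a child of the calling shell so that wait works, as it is in these tests.

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1        # already gone?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
        [ "$name" = sudo ] && return 1                # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }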
00:05:37.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.659 17:05:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.659 --rc genhtml_branch_coverage=1 00:05:37.659 --rc genhtml_function_coverage=1 00:05:37.659 --rc genhtml_legend=1 00:05:37.659 --rc geninfo_all_blocks=1 00:05:37.659 --rc geninfo_unexecuted_blocks=1 00:05:37.659 00:05:37.659 ' 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.659 --rc genhtml_branch_coverage=1 00:05:37.659 --rc genhtml_function_coverage=1 00:05:37.659 --rc genhtml_legend=1 00:05:37.659 --rc geninfo_all_blocks=1 00:05:37.659 --rc 
geninfo_unexecuted_blocks=1 00:05:37.659 00:05:37.659 ' 00:05:37.659 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.659 --rc genhtml_branch_coverage=1 00:05:37.659 --rc genhtml_function_coverage=1 00:05:37.659 --rc genhtml_legend=1 00:05:37.659 --rc geninfo_all_blocks=1 00:05:37.660 --rc geninfo_unexecuted_blocks=1 00:05:37.660 00:05:37.660 ' 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.660 --rc genhtml_branch_coverage=1 00:05:37.660 --rc genhtml_function_coverage=1 00:05:37.660 --rc genhtml_legend=1 00:05:37.660 --rc geninfo_all_blocks=1 00:05:37.660 --rc geninfo_unexecuted_blocks=1 00:05:37.660 00:05:37.660 ' 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2779950 00:05:37.660 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2779950 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2779950 ']' 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.660 17:05:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.920 17:05:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.920 [2024-10-01 17:05:36.264711] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:05:37.920 [2024-10-01 17:05:36.264788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779950 ] 00:05:37.920 [2024-10-01 17:05:36.328918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.920 [2024-10-01 17:05:36.369321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.920 [2024-10-01 17:05:36.369418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.862 17:05:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.862 17:05:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:38.862 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2779969 00:05:38.862 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.862 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.862 [ 00:05:38.862 "bdev_malloc_delete", 00:05:38.862 "bdev_malloc_create", 00:05:38.862 "bdev_null_resize", 00:05:38.862 "bdev_null_delete", 00:05:38.862 "bdev_null_create", 00:05:38.862 "bdev_nvme_cuse_unregister", 00:05:38.862 "bdev_nvme_cuse_register", 00:05:38.862 "bdev_opal_new_user", 00:05:38.862 "bdev_opal_set_lock_state", 00:05:38.862 "bdev_opal_delete", 00:05:38.862 "bdev_opal_get_info", 00:05:38.862 "bdev_opal_create", 00:05:38.862 "bdev_nvme_opal_revert", 00:05:38.862 "bdev_nvme_opal_init", 00:05:38.862 "bdev_nvme_send_cmd", 00:05:38.862 "bdev_nvme_set_keys", 00:05:38.862 "bdev_nvme_get_path_iostat", 00:05:38.862 "bdev_nvme_get_mdns_discovery_info", 00:05:38.862 "bdev_nvme_stop_mdns_discovery", 00:05:38.862 "bdev_nvme_start_mdns_discovery", 00:05:38.862 "bdev_nvme_set_multipath_policy", 00:05:38.862 "bdev_nvme_set_preferred_path", 00:05:38.862 "bdev_nvme_get_io_paths", 00:05:38.862 "bdev_nvme_remove_error_injection", 00:05:38.862 "bdev_nvme_add_error_injection", 00:05:38.862 "bdev_nvme_get_discovery_info", 00:05:38.862 "bdev_nvme_stop_discovery", 00:05:38.862 "bdev_nvme_start_discovery", 00:05:38.862 "bdev_nvme_get_controller_health_info", 00:05:38.862 "bdev_nvme_disable_controller", 00:05:38.862 "bdev_nvme_enable_controller", 00:05:38.862 "bdev_nvme_reset_controller", 00:05:38.862 "bdev_nvme_get_transport_statistics", 00:05:38.862 "bdev_nvme_apply_firmware", 00:05:38.862 "bdev_nvme_detach_controller", 00:05:38.862 "bdev_nvme_get_controllers", 00:05:38.862 "bdev_nvme_attach_controller", 00:05:38.862 "bdev_nvme_set_hotplug", 00:05:38.862 "bdev_nvme_set_options", 00:05:38.862 "bdev_passthru_delete", 00:05:38.862 "bdev_passthru_create", 00:05:38.862 "bdev_lvol_set_parent_bdev", 00:05:38.862 "bdev_lvol_set_parent", 00:05:38.862 "bdev_lvol_check_shallow_copy", 00:05:38.862 "bdev_lvol_start_shallow_copy", 00:05:38.862 "bdev_lvol_grow_lvstore", 00:05:38.862 "bdev_lvol_get_lvols", 00:05:38.862 "bdev_lvol_get_lvstores", 00:05:38.862 "bdev_lvol_delete", 00:05:38.862 "bdev_lvol_set_read_only", 00:05:38.862 "bdev_lvol_resize", 00:05:38.862 "bdev_lvol_decouple_parent", 00:05:38.862 "bdev_lvol_inflate", 00:05:38.862 "bdev_lvol_rename", 00:05:38.862 "bdev_lvol_clone_bdev", 00:05:38.862 "bdev_lvol_clone", 00:05:38.862 "bdev_lvol_snapshot", 00:05:38.862 "bdev_lvol_create", 00:05:38.862 "bdev_lvol_delete_lvstore", 00:05:38.862 "bdev_lvol_rename_lvstore", 
00:05:38.862 "bdev_lvol_create_lvstore", 00:05:38.862 "bdev_raid_set_options", 00:05:38.862 "bdev_raid_remove_base_bdev", 00:05:38.862 "bdev_raid_add_base_bdev", 00:05:38.862 "bdev_raid_delete", 00:05:38.862 "bdev_raid_create", 00:05:38.862 "bdev_raid_get_bdevs", 00:05:38.862 "bdev_error_inject_error", 00:05:38.862 "bdev_error_delete", 00:05:38.862 "bdev_error_create", 00:05:38.862 "bdev_split_delete", 00:05:38.862 "bdev_split_create", 00:05:38.862 "bdev_delay_delete", 00:05:38.862 "bdev_delay_create", 00:05:38.862 "bdev_delay_update_latency", 00:05:38.862 "bdev_zone_block_delete", 00:05:38.862 "bdev_zone_block_create", 00:05:38.862 "blobfs_create", 00:05:38.862 "blobfs_detect", 00:05:38.862 "blobfs_set_cache_size", 00:05:38.862 "bdev_aio_delete", 00:05:38.862 "bdev_aio_rescan", 00:05:38.862 "bdev_aio_create", 00:05:38.862 "bdev_ftl_set_property", 00:05:38.862 "bdev_ftl_get_properties", 00:05:38.862 "bdev_ftl_get_stats", 00:05:38.862 "bdev_ftl_unmap", 00:05:38.862 "bdev_ftl_unload", 00:05:38.862 "bdev_ftl_delete", 00:05:38.862 "bdev_ftl_load", 00:05:38.862 "bdev_ftl_create", 00:05:38.862 "bdev_virtio_attach_controller", 00:05:38.862 "bdev_virtio_scsi_get_devices", 00:05:38.862 "bdev_virtio_detach_controller", 00:05:38.862 "bdev_virtio_blk_set_hotplug", 00:05:38.862 "bdev_iscsi_delete", 00:05:38.862 "bdev_iscsi_create", 00:05:38.862 "bdev_iscsi_set_options", 00:05:38.862 "accel_error_inject_error", 00:05:38.862 "ioat_scan_accel_module", 00:05:38.863 "dsa_scan_accel_module", 00:05:38.863 "iaa_scan_accel_module", 00:05:38.863 "vfu_virtio_create_fs_endpoint", 00:05:38.863 "vfu_virtio_create_scsi_endpoint", 00:05:38.863 "vfu_virtio_scsi_remove_target", 00:05:38.863 "vfu_virtio_scsi_add_target", 00:05:38.863 "vfu_virtio_create_blk_endpoint", 00:05:38.863 "vfu_virtio_delete_endpoint", 00:05:38.863 "keyring_file_remove_key", 00:05:38.863 "keyring_file_add_key", 00:05:38.863 "keyring_linux_set_options", 00:05:38.863 "fsdev_aio_delete", 00:05:38.863 "fsdev_aio_create", 00:05:38.863 "iscsi_get_histogram", 00:05:38.863 "iscsi_enable_histogram", 00:05:38.863 "iscsi_set_options", 00:05:38.863 "iscsi_get_auth_groups", 00:05:38.863 "iscsi_auth_group_remove_secret", 00:05:38.863 "iscsi_auth_group_add_secret", 00:05:38.863 "iscsi_delete_auth_group", 00:05:38.863 "iscsi_create_auth_group", 00:05:38.863 "iscsi_set_discovery_auth", 00:05:38.863 "iscsi_get_options", 00:05:38.863 "iscsi_target_node_request_logout", 00:05:38.863 "iscsi_target_node_set_redirect", 00:05:38.863 "iscsi_target_node_set_auth", 00:05:38.863 "iscsi_target_node_add_lun", 00:05:38.863 "iscsi_get_stats", 00:05:38.863 "iscsi_get_connections", 00:05:38.863 "iscsi_portal_group_set_auth", 00:05:38.863 "iscsi_start_portal_group", 00:05:38.863 "iscsi_delete_portal_group", 00:05:38.863 "iscsi_create_portal_group", 00:05:38.863 "iscsi_get_portal_groups", 00:05:38.863 "iscsi_delete_target_node", 00:05:38.863 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.863 "iscsi_target_node_add_pg_ig_maps", 00:05:38.863 "iscsi_create_target_node", 00:05:38.863 "iscsi_get_target_nodes", 00:05:38.863 "iscsi_delete_initiator_group", 00:05:38.863 "iscsi_initiator_group_remove_initiators", 00:05:38.863 "iscsi_initiator_group_add_initiators", 00:05:38.863 "iscsi_create_initiator_group", 00:05:38.863 "iscsi_get_initiator_groups", 00:05:38.863 "nvmf_set_crdt", 00:05:38.863 "nvmf_set_config", 00:05:38.863 "nvmf_set_max_subsystems", 00:05:38.863 "nvmf_stop_mdns_prr", 00:05:38.863 "nvmf_publish_mdns_prr", 00:05:38.863 "nvmf_subsystem_get_listeners", 00:05:38.863 
"nvmf_subsystem_get_qpairs", 00:05:38.863 "nvmf_subsystem_get_controllers", 00:05:38.863 "nvmf_get_stats", 00:05:38.863 "nvmf_get_transports", 00:05:38.863 "nvmf_create_transport", 00:05:38.863 "nvmf_get_targets", 00:05:38.863 "nvmf_delete_target", 00:05:38.863 "nvmf_create_target", 00:05:38.863 "nvmf_subsystem_allow_any_host", 00:05:38.863 "nvmf_subsystem_set_keys", 00:05:38.863 "nvmf_subsystem_remove_host", 00:05:38.863 "nvmf_subsystem_add_host", 00:05:38.863 "nvmf_ns_remove_host", 00:05:38.863 "nvmf_ns_add_host", 00:05:38.863 "nvmf_subsystem_remove_ns", 00:05:38.863 "nvmf_subsystem_set_ns_ana_group", 00:05:38.863 "nvmf_subsystem_add_ns", 00:05:38.863 "nvmf_subsystem_listener_set_ana_state", 00:05:38.863 "nvmf_discovery_get_referrals", 00:05:38.863 "nvmf_discovery_remove_referral", 00:05:38.863 "nvmf_discovery_add_referral", 00:05:38.863 "nvmf_subsystem_remove_listener", 00:05:38.863 "nvmf_subsystem_add_listener", 00:05:38.863 "nvmf_delete_subsystem", 00:05:38.863 "nvmf_create_subsystem", 00:05:38.863 "nvmf_get_subsystems", 00:05:38.863 "env_dpdk_get_mem_stats", 00:05:38.863 "nbd_get_disks", 00:05:38.863 "nbd_stop_disk", 00:05:38.863 "nbd_start_disk", 00:05:38.863 "ublk_recover_disk", 00:05:38.863 "ublk_get_disks", 00:05:38.863 "ublk_stop_disk", 00:05:38.863 "ublk_start_disk", 00:05:38.863 "ublk_destroy_target", 00:05:38.863 "ublk_create_target", 00:05:38.863 "virtio_blk_create_transport", 00:05:38.863 "virtio_blk_get_transports", 00:05:38.863 "vhost_controller_set_coalescing", 00:05:38.863 "vhost_get_controllers", 00:05:38.863 "vhost_delete_controller", 00:05:38.863 "vhost_create_blk_controller", 00:05:38.863 "vhost_scsi_controller_remove_target", 00:05:38.863 "vhost_scsi_controller_add_target", 00:05:38.863 "vhost_start_scsi_controller", 00:05:38.863 "vhost_create_scsi_controller", 00:05:38.863 "thread_set_cpumask", 00:05:38.863 "scheduler_set_options", 00:05:38.863 "framework_get_governor", 00:05:38.863 "framework_get_scheduler", 00:05:38.863 "framework_set_scheduler", 00:05:38.863 "framework_get_reactors", 00:05:38.863 "thread_get_io_channels", 00:05:38.863 "thread_get_pollers", 00:05:38.863 "thread_get_stats", 00:05:38.863 "framework_monitor_context_switch", 00:05:38.863 "spdk_kill_instance", 00:05:38.863 "log_enable_timestamps", 00:05:38.863 "log_get_flags", 00:05:38.863 "log_clear_flag", 00:05:38.863 "log_set_flag", 00:05:38.863 "log_get_level", 00:05:38.863 "log_set_level", 00:05:38.863 "log_get_print_level", 00:05:38.863 "log_set_print_level", 00:05:38.863 "framework_enable_cpumask_locks", 00:05:38.863 "framework_disable_cpumask_locks", 00:05:38.863 "framework_wait_init", 00:05:38.863 "framework_start_init", 00:05:38.863 "scsi_get_devices", 00:05:38.863 "bdev_get_histogram", 00:05:38.863 "bdev_enable_histogram", 00:05:38.863 "bdev_set_qos_limit", 00:05:38.863 "bdev_set_qd_sampling_period", 00:05:38.863 "bdev_get_bdevs", 00:05:38.863 "bdev_reset_iostat", 00:05:38.863 "bdev_get_iostat", 00:05:38.863 "bdev_examine", 00:05:38.863 "bdev_wait_for_examine", 00:05:38.863 "bdev_set_options", 00:05:38.863 "accel_get_stats", 00:05:38.863 "accel_set_options", 00:05:38.863 "accel_set_driver", 00:05:38.863 "accel_crypto_key_destroy", 00:05:38.863 "accel_crypto_keys_get", 00:05:38.863 "accel_crypto_key_create", 00:05:38.863 "accel_assign_opc", 00:05:38.863 "accel_get_module_info", 00:05:38.863 "accel_get_opc_assignments", 00:05:38.863 "vmd_rescan", 00:05:38.863 "vmd_remove_device", 00:05:38.863 "vmd_enable", 00:05:38.863 "sock_get_default_impl", 00:05:38.863 "sock_set_default_impl", 
00:05:38.863 "sock_impl_set_options", 00:05:38.863 "sock_impl_get_options", 00:05:38.863 "iobuf_get_stats", 00:05:38.863 "iobuf_set_options", 00:05:38.863 "keyring_get_keys", 00:05:38.863 "vfu_tgt_set_base_path", 00:05:38.863 "framework_get_pci_devices", 00:05:38.863 "framework_get_config", 00:05:38.863 "framework_get_subsystems", 00:05:38.863 "fsdev_set_opts", 00:05:38.863 "fsdev_get_opts", 00:05:38.863 "trace_get_info", 00:05:38.863 "trace_get_tpoint_group_mask", 00:05:38.863 "trace_disable_tpoint_group", 00:05:38.863 "trace_enable_tpoint_group", 00:05:38.863 "trace_clear_tpoint_mask", 00:05:38.863 "trace_set_tpoint_mask", 00:05:38.863 "notify_get_notifications", 00:05:38.863 "notify_get_types", 00:05:38.863 "spdk_get_version", 00:05:38.863 "rpc_get_methods" 00:05:38.863 ] 00:05:38.863 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.863 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.863 17:05:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2779950 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2779950 ']' 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2779950 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2779950 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2779950' 00:05:38.863 killing process with pid 2779950 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2779950 00:05:38.863 17:05:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2779950 00:05:39.125 00:05:39.125 real 0m1.530s 00:05:39.125 user 0m2.816s 00:05:39.125 sys 0m0.453s 00:05:39.125 17:05:37 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.125 17:05:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.125 ************************************ 00:05:39.125 END TEST spdkcli_tcp 00:05:39.125 ************************************ 00:05:39.125 17:05:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.125 17:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.125 17:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.125 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.125 ************************************ 00:05:39.125 START TEST dpdk_mem_utility 00:05:39.125 ************************************ 00:05:39.125 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.386 * Looking for test storage... 
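The rpc_get_methods listing above was produced by bridging TCP to the target's UNIX-domain RPC socket: spdkcli_tcp runs socat as a TCP listener and points rpc.py at 127.0.0.1:9998. A minimal sketch using the port, retry, and timeout values copied from the trace:

    RPC_SOCK=/var/tmp/spdk.sock

    # Forward TCP port 9998 to the UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:"$RPC_SOCK" &
    socat_pid=$!

    # Same RPC server, now reached over TCP (-r retries, -t timeout seconds).
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"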
00:05:39.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:39.386 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.386 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.387 17:05:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.387 --rc genhtml_branch_coverage=1 00:05:39.387 --rc genhtml_function_coverage=1 00:05:39.387 --rc genhtml_legend=1 00:05:39.387 --rc geninfo_all_blocks=1 00:05:39.387 --rc geninfo_unexecuted_blocks=1 00:05:39.387 00:05:39.387 ' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.387 --rc 
genhtml_branch_coverage=1 00:05:39.387 --rc genhtml_function_coverage=1 00:05:39.387 --rc genhtml_legend=1 00:05:39.387 --rc geninfo_all_blocks=1 00:05:39.387 --rc geninfo_unexecuted_blocks=1 00:05:39.387 00:05:39.387 ' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.387 --rc genhtml_branch_coverage=1 00:05:39.387 --rc genhtml_function_coverage=1 00:05:39.387 --rc genhtml_legend=1 00:05:39.387 --rc geninfo_all_blocks=1 00:05:39.387 --rc geninfo_unexecuted_blocks=1 00:05:39.387 00:05:39.387 ' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.387 --rc genhtml_branch_coverage=1 00:05:39.387 --rc genhtml_function_coverage=1 00:05:39.387 --rc genhtml_legend=1 00:05:39.387 --rc geninfo_all_blocks=1 00:05:39.387 --rc geninfo_unexecuted_blocks=1 00:05:39.387 00:05:39.387 ' 00:05:39.387 17:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.387 17:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2780369 00:05:39.387 17:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2780369 00:05:39.387 17:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2780369 ']' 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.387 17:05:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.387 [2024-10-01 17:05:37.887809] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:05:39.387 [2024-10-01 17:05:37.887887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780369 ] 00:05:39.648 [2024-10-01 17:05:37.951710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.648 [2024-10-01 17:05:37.990905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.220 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.220 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:40.220 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.220 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.220 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.220 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.220 { 00:05:40.220 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.220 } 00:05:40.220 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.220 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.220 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:40.220 1 heaps totaling size 860.000000 MiB 00:05:40.220 size: 860.000000 MiB heap id: 0 00:05:40.220 end heaps---------- 00:05:40.220 9 mempools totaling size 642.649841 MiB 00:05:40.220 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.220 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.220 size: 92.545471 MiB name: bdev_io_2780369 00:05:40.220 size: 51.011292 MiB name: evtpool_2780369 00:05:40.220 size: 50.003479 MiB name: msgpool_2780369 00:05:40.220 size: 36.509338 MiB name: fsdev_io_2780369 00:05:40.220 size: 21.763794 MiB name: PDU_Pool 00:05:40.220 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.220 size: 0.026123 MiB name: Session_Pool 00:05:40.220 end mempools------- 00:05:40.220 6 memzones totaling size 4.142822 MiB 00:05:40.220 size: 1.000366 MiB name: RG_ring_0_2780369 00:05:40.220 size: 1.000366 MiB name: RG_ring_1_2780369 00:05:40.220 size: 1.000366 MiB name: RG_ring_4_2780369 00:05:40.220 size: 1.000366 MiB name: RG_ring_5_2780369 00:05:40.220 size: 0.125366 MiB name: RG_ring_2_2780369 00:05:40.220 size: 0.015991 MiB name: RG_ring_3_2780369 00:05:40.220 end memzones------- 00:05:40.220 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.481 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:40.481 list of free elements. 
size: 13.984680 MiB 00:05:40.481 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.481 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:40.481 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:40.481 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:40.481 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:40.481 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:40.481 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:40.481 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:40.481 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:40.481 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:40.481 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:40.481 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:40.481 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:40.481 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:40.481 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:40.481 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:40.481 list of standard malloc elements. size: 199.218628 MiB 00:05:40.481 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:40.481 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:40.481 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:40.481 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:40.481 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:40.481 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.481 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:40.481 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.481 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:40.481 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:40.481 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:40.481 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:40.481 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:40.481 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:40.481 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:40.481 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:40.482 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:40.482 list of memzone associated elements. size: 646.796692 MiB 00:05:40.482 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:40.482 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.482 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:40.482 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.482 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:40.482 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2780369_0 00:05:40.482 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.482 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2780369_0 00:05:40.482 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.482 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2780369_0 00:05:40.482 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:40.482 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2780369_0 00:05:40.482 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:40.482 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.482 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:40.482 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.482 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.482 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2780369 00:05:40.482 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.482 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2780369 00:05:40.482 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.482 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2780369 00:05:40.482 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:40.482 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.482 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:40.482 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.482 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:40.482 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.482 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:40.482 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.482 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.482 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2780369 00:05:40.482 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.482 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_2780369 00:05:40.482 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:40.482 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2780369 00:05:40.482 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:40.482 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2780369 00:05:40.482 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:40.482 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2780369 00:05:40.482 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:40.482 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2780369 00:05:40.482 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:40.482 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.482 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:40.482 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.482 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:40.482 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.482 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:40.482 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2780369 00:05:40.482 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:40.482 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.482 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:40.482 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.482 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:40.482 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2780369 00:05:40.482 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:40.482 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.482 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:40.482 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2780369 00:05:40.482 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:40.482 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2780369 00:05:40.482 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:40.482 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2780369 00:05:40.482 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:40.482 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.482 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.482 17:05:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2780369 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2780369 ']' 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2780369 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2780369 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2780369' 
00:05:40.482 killing process with pid 2780369 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2780369 00:05:40.482 17:05:38 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2780369 00:05:40.742 00:05:40.742 real 0m1.458s 00:05:40.742 user 0m1.548s 00:05:40.742 sys 0m0.429s 00:05:40.742 17:05:39 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.742 17:05:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.742 ************************************ 00:05:40.742 END TEST dpdk_mem_utility 00:05:40.742 ************************************ 00:05:40.742 17:05:39 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:40.742 17:05:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.742 17:05:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.742 17:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.742 ************************************ 00:05:40.742 START TEST event 00:05:40.742 ************************************ 00:05:40.742 17:05:39 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:40.742 * Looking for test storage... 00:05:40.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:40.742 17:05:39 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.742 17:05:39 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.742 17:05:39 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.002 17:05:39 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.002 17:05:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.002 17:05:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.002 17:05:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.002 17:05:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.002 17:05:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.002 17:05:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.002 17:05:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.002 17:05:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.002 17:05:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.002 17:05:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.002 17:05:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.002 17:05:39 event -- scripts/common.sh@344 -- # case "$op" in 00:05:41.002 17:05:39 event -- scripts/common.sh@345 -- # : 1 00:05:41.002 17:05:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.002 17:05:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.002 17:05:39 event -- scripts/common.sh@365 -- # decimal 1 00:05:41.002 17:05:39 event -- scripts/common.sh@353 -- # local d=1 00:05:41.002 17:05:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.003 17:05:39 event -- scripts/common.sh@355 -- # echo 1 00:05:41.003 17:05:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.003 17:05:39 event -- scripts/common.sh@366 -- # decimal 2 00:05:41.003 17:05:39 event -- scripts/common.sh@353 -- # local d=2 00:05:41.003 17:05:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.003 17:05:39 event -- scripts/common.sh@355 -- # echo 2 00:05:41.003 17:05:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.003 17:05:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.003 17:05:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.003 17:05:39 event -- scripts/common.sh@368 -- # return 0 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.003 --rc genhtml_branch_coverage=1 00:05:41.003 --rc genhtml_function_coverage=1 00:05:41.003 --rc genhtml_legend=1 00:05:41.003 --rc geninfo_all_blocks=1 00:05:41.003 --rc geninfo_unexecuted_blocks=1 00:05:41.003 00:05:41.003 ' 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.003 --rc genhtml_branch_coverage=1 00:05:41.003 --rc genhtml_function_coverage=1 00:05:41.003 --rc genhtml_legend=1 00:05:41.003 --rc geninfo_all_blocks=1 00:05:41.003 --rc geninfo_unexecuted_blocks=1 00:05:41.003 00:05:41.003 ' 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.003 --rc genhtml_branch_coverage=1 00:05:41.003 --rc genhtml_function_coverage=1 00:05:41.003 --rc genhtml_legend=1 00:05:41.003 --rc geninfo_all_blocks=1 00:05:41.003 --rc geninfo_unexecuted_blocks=1 00:05:41.003 00:05:41.003 ' 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.003 --rc genhtml_branch_coverage=1 00:05:41.003 --rc genhtml_function_coverage=1 00:05:41.003 --rc genhtml_legend=1 00:05:41.003 --rc geninfo_all_blocks=1 00:05:41.003 --rc geninfo_unexecuted_blocks=1 00:05:41.003 00:05:41.003 ' 00:05:41.003 17:05:39 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:41.003 17:05:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.003 17:05:39 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:41.003 17:05:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.003 17:05:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 ************************************ 00:05:41.003 START TEST event_perf 00:05:41.003 ************************************ 00:05:41.003 17:05:39 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:41.003 Running I/O for 1 seconds...[2024-10-01 17:05:39.397954] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:41.003 [2024-10-01 17:05:39.398053] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780766 ] 00:05:41.003 [2024-10-01 17:05:39.462900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.003 [2024-10-01 17:05:39.496632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.003 [2024-10-01 17:05:39.496747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.003 [2024-10-01 17:05:39.496901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.003 Running I/O for 1 seconds...[2024-10-01 17:05:39.496902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.386 00:05:42.386 lcore 0: 183587 00:05:42.386 lcore 1: 183586 00:05:42.386 lcore 2: 183588 00:05:42.386 lcore 3: 183592 00:05:42.386 done. 00:05:42.386 00:05:42.386 real 0m1.161s 00:05:42.386 user 0m4.083s 00:05:42.386 sys 0m0.076s 00:05:42.386 17:05:40 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.386 17:05:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.386 ************************************ 00:05:42.386 END TEST event_perf 00:05:42.386 ************************************ 00:05:42.386 17:05:40 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.386 17:05:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:42.386 17:05:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.386 17:05:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.386 ************************************ 00:05:42.386 START TEST event_reactor 00:05:42.386 ************************************ 00:05:42.386 17:05:40 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.386 [2024-10-01 17:05:40.635982] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
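Every target in this log is bracketed by the harness's waitforlisten and killprocess helpers, whose traces are visible above (max_retries=100, kill -0, ps --no-headers -o comm=, the sudo guard, kill/wait). The real implementations in test/common/autotest_common.sh do more bookkeeping; a simplified sketch of the pattern, with the retry interval as an assumption, is:

```bash
# Simplified re-sketch of the process lifecycle helpers seen in the trace.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do                  # max_retries=100, as in the log
        kill -0 "$pid" || return 1                   # target must still be alive
        [[ -S $sock ]] && return 0                   # socket appears once it listens
        sleep 0.1                                    # interval is an assumption
    done
    return 1
}

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                       # already gone
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1   # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # reap; ignore signal exit status
}
```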
00:05:42.387 [2024-10-01 17:05:40.636073] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780901 ] 00:05:42.387 [2024-10-01 17:05:40.701232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.387 [2024-10-01 17:05:40.732055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.326 test_start 00:05:43.326 oneshot 00:05:43.326 tick 100 00:05:43.326 tick 100 00:05:43.326 tick 250 00:05:43.326 tick 100 00:05:43.326 tick 100 00:05:43.326 tick 100 00:05:43.326 tick 250 00:05:43.326 tick 500 00:05:43.326 tick 100 00:05:43.326 tick 100 00:05:43.326 tick 250 00:05:43.326 tick 100 00:05:43.326 tick 100 00:05:43.326 test_end 00:05:43.326 00:05:43.326 real 0m1.157s 00:05:43.326 user 0m1.081s 00:05:43.326 sys 0m0.072s 00:05:43.326 17:05:41 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.326 17:05:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.326 ************************************ 00:05:43.326 END TEST event_reactor 00:05:43.326 ************************************ 00:05:43.327 17:05:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.327 17:05:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:43.327 17:05:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.327 17:05:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.327 ************************************ 00:05:43.327 START TEST event_reactor_perf 00:05:43.327 ************************************ 00:05:43.327 17:05:41 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.327 [2024-10-01 17:05:41.871238] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
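Before each suite the coverage gate splits dotted version strings on '.', '-' and ':' and compares them field by field (the `lt 1.15 2` / `cmp_versions` trace repeated above). A compact re-sketch of that comparison, not the actual scripts/common.sh code, is:

```bash
# Re-sketch of the dotted-version "less than" test traced above; only the
# lt case used by the lcov gate is shown, and missing fields compare as 0.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    local v a b
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0              # non-numeric fields compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                                     # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # same decision the gate makes
```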
00:05:43.327 [2024-10-01 17:05:41.871358] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781160 ] 00:05:43.586 [2024-10-01 17:05:41.942362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.586 [2024-10-01 17:05:41.971309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.528 test_start 00:05:44.528 test_end 00:05:44.528 Performance: 371217 events per second 00:05:44.528 00:05:44.528 real 0m1.164s 00:05:44.528 user 0m1.079s 00:05:44.528 sys 0m0.081s 00:05:44.528 17:05:43 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.528 17:05:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.528 ************************************ 00:05:44.528 END TEST event_reactor_perf 00:05:44.528 ************************************ 00:05:44.528 17:05:43 event -- event/event.sh@49 -- # uname -s 00:05:44.528 17:05:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:44.528 17:05:43 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.528 17:05:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.528 17:05:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.528 17:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.788 ************************************ 00:05:44.788 START TEST event_scheduler 00:05:44.788 ************************************ 00:05:44.788 17:05:43 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:44.788 * Looking for test storage... 
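event_perf, event_reactor and event_reactor_perf above are thin wrappers around stand-alone binaries, and the trace shows their flags, so the same measurements can be reproduced directly against a built tree (sketch; $SPDK_DIR is an assumed checkout path and the binaries must already be built):

```bash
# Re-run the three event micro-benchmarks with the flags the harness used.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}                          # assumed checkout

"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1     # events/sec per lcore
"$SPDK_DIR/test/event/reactor/reactor" -t 1                  # oneshot + tick schedule
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1        # "Performance: N events per second"
```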
00:05:44.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:44.788 17:05:43 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.788 17:05:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.788 17:05:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.788 17:05:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.788 17:05:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.789 17:05:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.789 --rc genhtml_branch_coverage=1 00:05:44.789 --rc genhtml_function_coverage=1 00:05:44.789 --rc genhtml_legend=1 00:05:44.789 --rc geninfo_all_blocks=1 00:05:44.789 --rc geninfo_unexecuted_blocks=1 00:05:44.789 00:05:44.789 ' 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.789 --rc genhtml_branch_coverage=1 00:05:44.789 --rc genhtml_function_coverage=1 00:05:44.789 --rc genhtml_legend=1 00:05:44.789 --rc geninfo_all_blocks=1 00:05:44.789 --rc geninfo_unexecuted_blocks=1 00:05:44.789 00:05:44.789 ' 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.789 --rc genhtml_branch_coverage=1 00:05:44.789 --rc genhtml_function_coverage=1 00:05:44.789 --rc genhtml_legend=1 00:05:44.789 --rc geninfo_all_blocks=1 00:05:44.789 --rc geninfo_unexecuted_blocks=1 00:05:44.789 00:05:44.789 ' 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.789 --rc genhtml_branch_coverage=1 00:05:44.789 --rc genhtml_function_coverage=1 00:05:44.789 --rc genhtml_legend=1 00:05:44.789 --rc geninfo_all_blocks=1 00:05:44.789 --rc geninfo_unexecuted_blocks=1 00:05:44.789 00:05:44.789 ' 00:05:44.789 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.789 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2781542 00:05:44.789 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.789 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.789 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2781542 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2781542 ']' 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.789 17:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.048 [2024-10-01 17:05:43.345380] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:05:45.048 [2024-10-01 17:05:43.345430] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781542 ] 00:05:45.048 [2024-10-01 17:05:43.397392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.048 [2024-10-01 17:05:43.427685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.048 [2024-10-01 17:05:43.427839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.048 [2024-10-01 17:05:43.428000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.048 [2024-10-01 17:05:43.428014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:45.048 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.048 [2024-10-01 17:05:43.488460] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:45.048 [2024-10-01 17:05:43.488475] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.048 [2024-10-01 17:05:43.488482] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.048 [2024-10-01 17:05:43.488487] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.048 [2024-10-01 17:05:43.488490] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.048 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.048 [2024-10-01 17:05:43.542744] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
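The scheduler suite starts its test app with --wait-for-rpc, switches to the dynamic scheduler over RPC, and only then completes framework initialization, as the trace above shows (the dpdk governor may fail to initialize while the dynamic scheduler is still applied). Outside the harness the same sequence looks roughly like this (sketch; the sleep and the assumption that rpc_cmd wraps scripts/rpc.py on the default socket are mine):

```bash
# Set the SPDK scheduler before framework init, mirroring the trace above.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}                            # assumed checkout

"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
app_pid=$!
sleep 2                                                        # stand-in for waitforlisten

"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic     # pick the scheduler while
                                                               # the framework is paused
"$SPDK_DIR/scripts/rpc.py" framework_start_init                # finish bringing the app up

kill "$app_pid"
```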
00:05:45.048 17:05:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.048 17:05:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.049 17:05:43 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.049 17:05:43 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.049 17:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.049 ************************************ 00:05:45.049 START TEST scheduler_create_thread 00:05:45.049 ************************************ 00:05:45.049 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:45.049 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.049 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.049 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.049 2 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 3 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 4 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 5 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 6 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 7 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.309 8 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.309 17:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.251 9 00:05:46.251 17:05:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.251 17:05:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.251 17:05:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.251 17:05:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.192 10 00:05:47.192 17:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.192 17:05:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.192 17:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.192 17:05:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.132 17:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.132 17:05:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.132 17:05:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.132 17:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.132 17:05:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.703 17:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.703 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.703 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.273 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.273 17:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.273 17:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.273 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.273 17:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.843 17:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.843 00:05:49.843 real 0m4.565s 00:05:49.843 user 0m0.024s 00:05:49.843 sys 0m0.008s 00:05:49.843 17:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.843 17:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.843 ************************************ 00:05:49.844 END TEST scheduler_create_thread 00:05:49.844 ************************************ 00:05:49.844 17:05:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.844 17:05:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2781542 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2781542 ']' 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2781542 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2781542 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2781542' 00:05:49.844 killing process with pid 2781542 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2781542 00:05:49.844 17:05:48 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2781542 00:05:49.844 [2024-10-01 17:05:48.375638] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
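The scheduler_create_thread test above drives everything through an RPC plugin: pinned active/idle threads are created, one is re-activated at 50%, and another is deleted. With the plugin importable, the same calls can be issued by hand (sketch; the plugin location, the way rpc_cmd forwards to scripts/rpc.py, and the assumption that scheduler_thread_create prints the new thread id, as the captured thread_id values in the log suggest, are mine):

```bash
# Issue the scheduler test-plugin RPCs seen above directly.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}                              # assumed checkout
export PYTHONPATH=$SPDK_DIR/test/event/scheduler                 # assumed plugin location
rpc="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"

$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100      # busy thread pinned to core 0
$rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0        # idle thread pinned to core 0
tid=$($rpc scheduler_thread_create -n half_active -a 0)          # new thread id (11 in the log)
$rpc scheduler_thread_set_active "$tid" 50                       # raise it to 50% active
tid=$($rpc scheduler_thread_create -n deleted -a 100)            # 12 in the log
$rpc scheduler_thread_delete "$tid"                              # and remove it again
```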
00:05:50.105 00:05:50.105 real 0m5.474s 00:05:50.105 user 0m12.126s 00:05:50.105 sys 0m0.353s 00:05:50.105 17:05:48 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.105 17:05:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.105 ************************************ 00:05:50.105 END TEST event_scheduler 00:05:50.105 ************************************ 00:05:50.105 17:05:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.105 17:05:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.105 17:05:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.105 17:05:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.105 17:05:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.397 ************************************ 00:05:50.397 START TEST app_repeat 00:05:50.397 ************************************ 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2782607 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2782607' 00:05:50.397 Process app_repeat pid: 2782607 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.397 spdk_app_start Round 0 00:05:50.397 17:05:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2782607 /var/tmp/spdk-nbd.sock 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2782607 ']' 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.397 17:05:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.397 [2024-10-01 17:05:48.700199] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
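The app_repeat trace that follows creates two 64 MiB malloc bdevs over /var/tmp/spdk-nbd.sock, exports them as /dev/nbd0 and /dev/nbd1, and verifies them with dd and cmp. Reduced to one bdev, the flow looks like this (sketch; it assumes root, the nbd module loaded, and an SPDK app already serving that socket, as app_repeat does here):

```bash
# Minimal nbd-backed verification of a malloc bdev, mirroring the trace below.
rpc="${SPDK_DIR:-/path/to/spdk}/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # assumed checkout path

$rpc bdev_malloc_create 64 4096                      # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
$rpc nbd_start_disk Malloc0 /dev/nbd0                # expose it as a kernel block device
$rpc nbd_get_disks                                   # confirm the bdev/nbd mapping

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of random data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through nbd
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0              # reading it back must match

$rpc nbd_stop_disk /dev/nbd0                         # detach before tearing down the app
```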
00:05:50.398 [2024-10-01 17:05:48.700261] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782607 ] 00:05:50.398 [2024-10-01 17:05:48.764147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.398 [2024-10-01 17:05:48.799037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.398 [2024-10-01 17:05:48.799063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.398 17:05:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.398 17:05:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.398 17:05:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.658 Malloc0 00:05:50.658 17:05:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.923 Malloc1 00:05:50.923 17:05:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.923 /dev/nbd0 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.923 1+0 records in 00:05:50.923 1+0 records out 00:05:50.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290146 s, 14.1 MB/s 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.923 17:05:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.923 17:05:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.184 /dev/nbd1 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.184 1+0 records in 00:05:51.184 1+0 records out 00:05:51.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284106 s, 14.4 MB/s 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.184 17:05:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.184 
17:05:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.184 17:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.446 { 00:05:51.446 "nbd_device": "/dev/nbd0", 00:05:51.446 "bdev_name": "Malloc0" 00:05:51.446 }, 00:05:51.446 { 00:05:51.446 "nbd_device": "/dev/nbd1", 00:05:51.446 "bdev_name": "Malloc1" 00:05:51.446 } 00:05:51.446 ]' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.446 { 00:05:51.446 "nbd_device": "/dev/nbd0", 00:05:51.446 "bdev_name": "Malloc0" 00:05:51.446 }, 00:05:51.446 { 00:05:51.446 "nbd_device": "/dev/nbd1", 00:05:51.446 "bdev_name": "Malloc1" 00:05:51.446 } 00:05:51.446 ]' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.446 /dev/nbd1' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.446 /dev/nbd1' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.446 256+0 records in 00:05:51.446 256+0 records out 00:05:51.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127514 s, 82.2 MB/s 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.446 256+0 records in 00:05:51.446 256+0 records out 00:05:51.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164539 s, 63.7 MB/s 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.446 17:05:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.447 256+0 records in 00:05:51.447 256+0 records out 00:05:51.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181447 s, 57.8 MB/s 00:05:51.447 17:05:49 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.447 17:05:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.708 17:05:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.970 17:05:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.231 17:05:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.231 17:05:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.231 17:05:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.491 [2024-10-01 17:05:50.871240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.491 [2024-10-01 17:05:50.901924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.491 [2024-10-01 17:05:50.901926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.491 [2024-10-01 17:05:50.933601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.491 [2024-10-01 17:05:50.933640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.789 17:05:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.789 17:05:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.789 spdk_app_start Round 1 00:05:55.789 17:05:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2782607 /var/tmp/spdk-nbd.sock 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2782607 ']' 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
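Reader's note: each app_repeat round in the trace above exercises the same NBD data path -- fill a temporary file with random data, push it onto every exported /dev/nbd* device with O_DIRECT, then read it back and byte-compare. A condensed sketch of what the nbd_common.sh write/verify pair (nbd_dd_data_verify) is doing, with paths shortened to /tmp for illustration rather than the exact helper source:

    tmp_file=/tmp/nbdrandtest                 # the trace uses spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 1 MiB of random data, copied to each NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1M of each device must match the source file exactly
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"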
00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.789 17:05:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:55.789 17:05:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.789 Malloc0 00:05:55.789 17:05:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.789 Malloc1 00:05:55.789 17:05:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.789 17:05:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.049 /dev/nbd0 00:05:56.049 17:05:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.049 17:05:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:56.049 1+0 records in 00:05:56.049 1+0 records out 00:05:56.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035158 s, 11.7 MB/s 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.049 17:05:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.049 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.049 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.049 17:05:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.311 /dev/nbd1 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.311 1+0 records in 00:05:56.311 1+0 records out 00:05:56.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287379 s, 14.3 MB/s 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.311 17:05:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.311 17:05:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.572 17:05:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:56.572 { 00:05:56.572 "nbd_device": "/dev/nbd0", 00:05:56.572 "bdev_name": "Malloc0" 00:05:56.572 }, 00:05:56.573 { 00:05:56.573 "nbd_device": "/dev/nbd1", 00:05:56.573 "bdev_name": "Malloc1" 00:05:56.573 } 00:05:56.573 ]' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.573 { 00:05:56.573 "nbd_device": "/dev/nbd0", 00:05:56.573 "bdev_name": "Malloc0" 00:05:56.573 }, 00:05:56.573 { 00:05:56.573 "nbd_device": "/dev/nbd1", 00:05:56.573 "bdev_name": "Malloc1" 00:05:56.573 } 00:05:56.573 ]' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.573 /dev/nbd1' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.573 /dev/nbd1' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.573 256+0 records in 00:05:56.573 256+0 records out 00:05:56.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125652 s, 83.5 MB/s 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.573 256+0 records in 00:05:56.573 256+0 records out 00:05:56.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175961 s, 59.6 MB/s 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.573 17:05:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.573 256+0 records in 00:05:56.573 256+0 records out 00:05:56.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203194 s, 51.6 MB/s 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.573 17:05:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.833 17:05:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.093 17:05:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.094 17:05:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.355 17:05:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.355 17:05:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.355 17:05:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.355 17:05:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.356 17:05:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.356 17:05:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.356 17:05:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.356 17:05:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.356 17:05:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.356 17:05:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.356 17:05:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.615 [2024-10-01 17:05:55.953920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.615 [2024-10-01 17:05:55.985082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.615 [2024-10-01 17:05:55.985243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.615 [2024-10-01 17:05:56.017459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.615 [2024-10-01 17:05:56.017501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.910 17:05:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.910 17:05:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.910 spdk_app_start Round 2 00:06:00.910 17:05:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2782607 /var/tmp/spdk-nbd.sock 00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2782607 ']' 00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
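Reader's note: before any dd traffic, the waitfornbd calls above make sure each NBD device has actually been published by the kernel and answers a read. Roughly (a simplified sketch; the retry pacing and temp-file path are assumptions, while the individual checks mirror the trace):

    waitfornbd_sketch() {
        local nbd_name=$1 i size
        # wait up to 20 attempts for the device to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the real helper may pace its retries differently
        done
        # prove it is readable: one 4 KiB O_DIRECT read must produce a non-empty file
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }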
00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.910 17:05:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.910 17:05:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.910 17:05:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.910 17:05:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.910 Malloc0 00:06:00.910 17:05:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.910 Malloc1 00:06:00.910 17:05:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.910 17:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.170 /dev/nbd0 00:06:01.170 17:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.170 17:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.170 17:05:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:01.170 1+0 records in 00:06:01.170 1+0 records out 00:06:01.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219193 s, 18.7 MB/s 00:06:01.171 17:05:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.171 17:05:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.171 17:05:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.171 17:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.171 17:05:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.171 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.171 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.171 17:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.430 /dev/nbd1 00:06:01.430 17:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.430 17:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.430 17:05:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:01.430 17:05:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.430 17:05:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.430 17:05:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.431 1+0 records in 00:06:01.431 1+0 records out 00:06:01.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249786 s, 16.4 MB/s 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.431 17:05:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.431 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.431 17:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.431 17:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.431 17:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.431 17:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.691 17:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:01.691 { 00:06:01.691 "nbd_device": "/dev/nbd0", 00:06:01.691 "bdev_name": "Malloc0" 00:06:01.691 }, 00:06:01.691 { 00:06:01.691 "nbd_device": "/dev/nbd1", 00:06:01.691 "bdev_name": "Malloc1" 00:06:01.691 } 00:06:01.691 ]' 00:06:01.691 17:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.691 { 00:06:01.691 "nbd_device": "/dev/nbd0", 00:06:01.691 "bdev_name": "Malloc0" 00:06:01.691 }, 00:06:01.691 { 00:06:01.691 "nbd_device": "/dev/nbd1", 00:06:01.691 "bdev_name": "Malloc1" 00:06:01.691 } 00:06:01.691 ]' 00:06:01.691 17:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.691 /dev/nbd1' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.691 /dev/nbd1' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.691 256+0 records in 00:06:01.691 256+0 records out 00:06:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127617 s, 82.2 MB/s 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.691 256+0 records in 00:06:01.691 256+0 records out 00:06:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165573 s, 63.3 MB/s 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.691 256+0 records in 00:06:01.691 256+0 records out 00:06:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177011 s, 59.2 MB/s 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.691 17:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.950 17:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.212 17:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.212 17:06:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.471 17:06:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.747 [2024-10-01 17:06:01.038306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.747 [2024-10-01 17:06:01.069548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.747 [2024-10-01 17:06:01.069550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.747 [2024-10-01 17:06:01.101045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.747 [2024-10-01 17:06:01.101082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.108 17:06:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2782607 /var/tmp/spdk-nbd.sock 00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2782607 ']' 00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
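Reader's note: the nbd_get_count checks that bracket each round query the target over its RPC socket and count the /dev/nbd entries that come back -- 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk. In shell terms it is roughly (rpc.py path shortened):

    rpc_sock=/var/tmp/spdk-nbd.sock
    disks_json=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
    disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks" | grep -c /dev/nbd || true)   # '|| true': an empty list counts as 0, not an error
    echo "$count"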
00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.108 17:06:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:06.108 17:06:04 event.app_repeat -- event/event.sh@39 -- # killprocess 2782607 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2782607 ']' 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2782607 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2782607 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2782607' 00:06:06.108 killing process with pid 2782607 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2782607 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2782607 00:06:06.108 spdk_app_start is called in Round 0. 00:06:06.108 Shutdown signal received, stop current app iteration 00:06:06.108 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:06:06.108 spdk_app_start is called in Round 1. 00:06:06.108 Shutdown signal received, stop current app iteration 00:06:06.108 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:06:06.108 spdk_app_start is called in Round 2. 00:06:06.108 Shutdown signal received, stop current app iteration 00:06:06.108 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:06:06.108 spdk_app_start is called in Round 3. 
00:06:06.108 Shutdown signal received, stop current app iteration 00:06:06.108 17:06:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.108 17:06:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.108 00:06:06.108 real 0m15.605s 00:06:06.108 user 0m34.001s 00:06:06.108 sys 0m2.286s 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.108 17:06:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.108 ************************************ 00:06:06.108 END TEST app_repeat 00:06:06.108 ************************************ 00:06:06.108 17:06:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.108 17:06:04 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.108 17:06:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.108 17:06:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.108 17:06:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.108 ************************************ 00:06:06.109 START TEST cpu_locks 00:06:06.109 ************************************ 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:06.109 * Looking for test storage... 00:06:06.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.109 17:06:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.109 --rc genhtml_branch_coverage=1 00:06:06.109 --rc genhtml_function_coverage=1 00:06:06.109 --rc genhtml_legend=1 00:06:06.109 --rc geninfo_all_blocks=1 00:06:06.109 --rc geninfo_unexecuted_blocks=1 00:06:06.109 00:06:06.109 ' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.109 --rc genhtml_branch_coverage=1 00:06:06.109 --rc genhtml_function_coverage=1 00:06:06.109 --rc genhtml_legend=1 00:06:06.109 --rc geninfo_all_blocks=1 00:06:06.109 --rc geninfo_unexecuted_blocks=1 00:06:06.109 00:06:06.109 ' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.109 --rc genhtml_branch_coverage=1 00:06:06.109 --rc genhtml_function_coverage=1 00:06:06.109 --rc genhtml_legend=1 00:06:06.109 --rc geninfo_all_blocks=1 00:06:06.109 --rc geninfo_unexecuted_blocks=1 00:06:06.109 00:06:06.109 ' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.109 --rc genhtml_branch_coverage=1 00:06:06.109 --rc genhtml_function_coverage=1 00:06:06.109 --rc genhtml_legend=1 00:06:06.109 --rc geninfo_all_blocks=1 00:06:06.109 --rc geninfo_unexecuted_blocks=1 00:06:06.109 00:06:06.109 ' 00:06:06.109 17:06:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.109 17:06:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.109 17:06:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.109 17:06:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.109 17:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.109 ************************************ 
00:06:06.109 START TEST default_locks 00:06:06.109 ************************************ 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2786270 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2786270 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2786270 ']' 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.109 17:06:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.109 [2024-10-01 17:06:04.650556] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:06.109 [2024-10-01 17:06:04.650624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786270 ] 00:06:06.368 [2024-10-01 17:06:04.712485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.368 [2024-10-01 17:06:04.743285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.937 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.937 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:06.937 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2786270 00:06:06.937 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.937 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2786270 00:06:07.197 lslocks: write error 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2786270 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2786270 ']' 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2786270 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.197 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2786270' 00:06:07.457 killing process with pid 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2786270 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.457 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2786270 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2786270 ']' 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
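Reader's note: default_locks leans on two helpers that the trace expands inline -- locks_exist, which asks lslocks whether the target still holds its spdk_cpu_lock file locks, and killprocess, which sanity-checks the pid before signalling it. A stripped-down sketch of the checks visible above (the real helpers carry extra branches, e.g. for sudo-wrapped processes):

    locks_exist() {
        # the running spdk_tgt is expected to hold file locks matching spdk_cpu_lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # fails fast if the pid is already gone
        ps --no-headers -o comm= "$pid"     # the trace expects reactor_0 here
        kill "$pid"
        wait "$pid"                         # reap it so later negative checks see it gone
    }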
00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2786270) - No such process 00:06:07.458 ERROR: process (pid: 2786270) is no longer running 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.458 00:06:07.458 real 0m1.397s 00:06:07.458 user 0m1.520s 00:06:07.458 sys 0m0.465s 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.458 17:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.458 ************************************ 00:06:07.458 END TEST default_locks 00:06:07.458 ************************************ 00:06:07.718 17:06:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.718 17:06:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.718 17:06:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.718 17:06:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.718 ************************************ 00:06:07.718 START TEST default_locks_via_rpc 00:06:07.718 ************************************ 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2786520 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2786520 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2786520 ']' 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.718 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
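Reader's note: the tail of default_locks is a negative test -- once the target is killed, waitforlisten for the same pid has to fail, and the NOT wrapper turns that expected failure into a pass. Its logic, condensed from the trace (the allowed-error-string handling is trimmed):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: treat as a real error
        (( es != 0 ))                    # NOT succeeds only when the wrapped command failed
    }

    # as used above: the pid was already killed, so this must "succeed by failing"
    NOT waitforlisten 2786270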
00:06:07.719 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.719 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.719 [2024-10-01 17:06:06.105814] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:07.719 [2024-10-01 17:06:06.105869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786520 ] 00:06:07.719 [2024-10-01 17:06:06.171148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.719 [2024-10-01 17:06:06.206656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.978 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2786520 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2786520 00:06:07.979 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2786520 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2786520 ']' 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2786520 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786520 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.240 
17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786520' 00:06:08.240 killing process with pid 2786520 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2786520 00:06:08.240 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2786520 00:06:08.501 00:06:08.501 real 0m0.812s 00:06:08.501 user 0m0.794s 00:06:08.501 sys 0m0.417s 00:06:08.501 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.501 17:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 ************************************ 00:06:08.501 END TEST default_locks_via_rpc 00:06:08.501 ************************************ 00:06:08.501 17:06:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.501 17:06:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.501 17:06:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.501 17:06:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 ************************************ 00:06:08.501 START TEST non_locking_app_on_locked_coremask 00:06:08.501 ************************************ 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2786708 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2786708 /var/tmp/spdk.sock 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2786708 ']' 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.501 17:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.501 [2024-10-01 17:06:06.998024] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:08.501 [2024-10-01 17:06:06.998084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786708 ] 00:06:08.761 [2024-10-01 17:06:07.059412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.762 [2024-10-01 17:06:07.094072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2786712 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2786712 /var/tmp/spdk2.sock 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2786712 ']' 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.762 17:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.022 [2024-10-01 17:06:07.324692] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:09.022 [2024-10-01 17:06:07.324745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786712 ] 00:06:09.022 [2024-10-01 17:06:07.413298] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
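The locks_exist helper that keeps expanding in this trace boils down to lslocks -p <pid> piped into grep -q spdk_cpu_lock; the stray "lslocks: write error" lines that show up further down are harmless, since grep -q exits on the first match and lslocks hits a broken pipe. A stand-alone version of that probe, assuming lslocks from util-linux is installed:

#!/usr/bin/env bash
# Sketch: does a running SPDK target still hold its per-core lock files?
pid=$1
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid still holds spdk_cpu_lock files"
else
    echo "pid $pid holds no spdk_cpu_lock files"
fi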
00:06:09.022 [2024-10-01 17:06:07.413326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.022 [2024-10-01 17:06:07.476411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.592 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.592 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.592 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2786708 00:06:09.592 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2786708 00:06:09.592 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.532 lslocks: write error 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2786708 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2786708 ']' 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2786708 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786708 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786708' 00:06:10.532 killing process with pid 2786708 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2786708 00:06:10.532 17:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2786708 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2786712 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2786712 ']' 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2786712 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786712 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786712' 00:06:10.792 
killing process with pid 2786712 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2786712 00:06:10.792 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2786712 00:06:11.053 00:06:11.053 real 0m2.559s 00:06:11.053 user 0m2.784s 00:06:11.053 sys 0m0.924s 00:06:11.053 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.053 17:06:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.053 ************************************ 00:06:11.053 END TEST non_locking_app_on_locked_coremask 00:06:11.053 ************************************ 00:06:11.053 17:06:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.053 17:06:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.053 17:06:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.053 17:06:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.053 ************************************ 00:06:11.053 START TEST locking_app_on_unlocked_coremask 00:06:11.053 ************************************ 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2787498 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2787498 /var/tmp/spdk.sock 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2787498 ']' 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.053 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.314 [2024-10-01 17:06:09.638035] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:11.314 [2024-10-01 17:06:09.638095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787498 ] 00:06:11.314 [2024-10-01 17:06:09.699721] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
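locking_app_on_unlocked_coremask starts the primary target with --disable-cpumask-locks, so it creates no lock files and a second target can take the very same core mask; only the RPC socket has to differ. A hedged sketch of that scenario, reusing the binary path and flags shown in the trace:

#!/usr/bin/env bash
# Sketch: two targets sharing core 0 because the first one opted out of core locks.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 --disable-cpumask-locks &   # primary: core 0, no /var/tmp/spdk_cpu_lock_* files
"$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &    # secondary: same core, its own RPC socket
wait                                           # both keep running; core 0 is shared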
00:06:11.314 [2024-10-01 17:06:09.699757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.314 [2024-10-01 17:06:09.734739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2787651 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2787651 /var/tmp/spdk2.sock 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2787651 ']' 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.575 17:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.575 [2024-10-01 17:06:09.969088] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:11.575 [2024-10-01 17:06:09.969143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787651 ] 00:06:11.575 [2024-10-01 17:06:10.059307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.836 [2024-10-01 17:06:10.122594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.407 17:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.407 17:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.407 17:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2787651 00:06:12.407 17:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2787651 00:06:12.407 17:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.979 lslocks: write error 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2787498 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2787498 ']' 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2787498 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.979 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2787498 00:06:13.241 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.241 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.241 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2787498' 00:06:13.241 killing process with pid 2787498 00:06:13.241 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2787498 00:06:13.241 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2787498 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2787651 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2787651 ']' 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2787651 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.502 17:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2787651 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.764 17:06:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2787651' 00:06:13.764 killing process with pid 2787651 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2787651 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2787651 00:06:13.764 00:06:13.764 real 0m2.682s 00:06:13.764 user 0m2.872s 00:06:13.764 sys 0m0.970s 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.764 17:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.764 ************************************ 00:06:13.764 END TEST locking_app_on_unlocked_coremask 00:06:13.764 ************************************ 00:06:13.764 17:06:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.764 17:06:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.764 17:06:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.764 17:06:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.025 ************************************ 00:06:14.025 START TEST locking_app_on_locked_coremask 00:06:14.025 ************************************ 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2788256 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2788256 /var/tmp/spdk.sock 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2788256 ']' 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.025 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.025 [2024-10-01 17:06:12.394401] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:14.025 [2024-10-01 17:06:12.394457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788256 ] 00:06:14.025 [2024-10-01 17:06:12.453852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.025 [2024-10-01 17:06:12.485727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2788339 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2788339 /var/tmp/spdk2.sock 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2788339 /var/tmp/spdk2.sock 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2788339 /var/tmp/spdk2.sock 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2788339 ']' 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.286 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.287 17:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.287 [2024-10-01 17:06:12.706515] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:14.287 [2024-10-01 17:06:12.706568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788339 ] 00:06:14.287 [2024-10-01 17:06:12.792734] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2788256 has claimed it. 00:06:14.287 [2024-10-01 17:06:12.792774] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:14.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2788339) - No such process 00:06:14.859 ERROR: process (pid: 2788339) is no longer running 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2788256 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2788256 00:06:14.859 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.431 lslocks: write error 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2788256 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2788256 ']' 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2788256 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788256 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788256' 00:06:15.431 killing process with pid 2788256 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2788256 00:06:15.431 17:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2788256 00:06:15.693 00:06:15.693 real 0m1.721s 00:06:15.693 user 0m1.898s 00:06:15.693 sys 0m0.606s 00:06:15.693 17:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
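locking_app_on_locked_coremask is the inverse case: the first target claims core 0, and the second one aborts with the "Cannot create lock on core 0, probably process 2788256 has claimed it" error captured above. Roughly how to reproduce that conflict by hand, assuming the same build tree layout as this job:

#!/usr/bin/env bash
# Sketch: the second target should refuse to start because core 0 is already locked.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                        # takes the lock file for core 0
sleep 2                                     # crude startup wait; the test uses waitforlisten instead
"$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock \
    || echo "second instance exited: core 0 already claimed"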
00:06:15.693 17:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.693 ************************************ 00:06:15.693 END TEST locking_app_on_locked_coremask 00:06:15.693 ************************************ 00:06:15.693 17:06:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.693 17:06:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.693 17:06:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.693 17:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.693 ************************************ 00:06:15.693 START TEST locking_overlapped_coremask 00:06:15.693 ************************************ 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2788630 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2788630 /var/tmp/spdk.sock 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2788630 ']' 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.693 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.693 [2024-10-01 17:06:14.197118] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:15.693 [2024-10-01 17:06:14.197174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788630 ] 00:06:15.955 [2024-10-01 17:06:14.260971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.955 [2024-10-01 17:06:14.300053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.955 [2024-10-01 17:06:14.300091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.955 [2024-10-01 17:06:14.300094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2788768 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2788768 /var/tmp/spdk2.sock 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2788768 /var/tmp/spdk2.sock 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2788768 /var/tmp/spdk2.sock 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2788768 ']' 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.955 17:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.216 [2024-10-01 17:06:14.532550] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:16.216 [2024-10-01 17:06:14.532603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788768 ] 00:06:16.216 [2024-10-01 17:06:14.605598] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2788630 has claimed it. 00:06:16.216 [2024-10-01 17:06:14.605631] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2788768) - No such process 00:06:16.788 ERROR: process (pid: 2788768) is no longer running 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2788630 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2788630 ']' 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2788630 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788630 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788630' 00:06:16.788 killing process with pid 2788630 00:06:16.788 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2788630 00:06:16.788 17:06:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2788630 00:06:17.050 00:06:17.050 real 0m1.279s 00:06:17.050 user 0m3.530s 00:06:17.050 sys 0m0.356s 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.050 ************************************ 00:06:17.050 END TEST locking_overlapped_coremask 00:06:17.050 ************************************ 00:06:17.050 17:06:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.050 17:06:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.050 17:06:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.050 17:06:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.050 ************************************ 00:06:17.050 START TEST locking_overlapped_coremask_via_rpc 00:06:17.050 ************************************ 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2788995 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2788995 /var/tmp/spdk.sock 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2788995 ']' 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.050 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.050 [2024-10-01 17:06:15.549712] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:17.050 [2024-10-01 17:06:15.549764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788995 ] 00:06:17.311 [2024-10-01 17:06:15.614320] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
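locking_overlapped_coremask ran the first target on mask 0x7 (cores 0-2) and tried a second on 0x1c (cores 2-4); the single shared core 2 was enough to make the second instance exit with the claim error above, leaving only the first target's lock files behind. The post-condition check in the trace compares a glob against a brace expansion, roughly:

#!/usr/bin/env bash
# Sketch of the check_remaining_locks idea: only cores 0-2 should still be locked.
locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 hold locks"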
00:06:17.311 [2024-10-01 17:06:15.614357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.311 [2024-10-01 17:06:15.649939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.311 [2024-10-01 17:06:15.650055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.311 [2024-10-01 17:06:15.650222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2789009 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2789009 /var/tmp/spdk2.sock 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2789009 ']' 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.311 17:06:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.572 [2024-10-01 17:06:15.874523] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:17.572 [2024-10-01 17:06:15.874571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789009 ] 00:06:17.572 [2024-10-01 17:06:15.947714] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:17.572 [2024-10-01 17:06:15.947743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.572 [2024-10-01 17:06:16.009337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.572 [2024-10-01 17:06:16.009490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.572 [2024-10-01 17:06:16.009493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.145 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.146 [2024-10-01 17:06:16.681062] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2788995 has claimed it. 
00:06:18.146 request: 00:06:18.146 { 00:06:18.146 "method": "framework_enable_cpumask_locks", 00:06:18.146 "req_id": 1 00:06:18.146 } 00:06:18.146 Got JSON-RPC error response 00:06:18.146 response: 00:06:18.146 { 00:06:18.146 "code": -32603, 00:06:18.146 "message": "Failed to claim CPU core: 2" 00:06:18.146 } 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.146 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.406 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2788995 /var/tmp/spdk.sock 00:06:18.406 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2788995 ']' 00:06:18.406 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2789009 /var/tmp/spdk2.sock 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2789009 ']' 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
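Both lock toggles exist as JSON-RPC methods: framework_disable_cpumask_locks releases a running target's per-core lock files and framework_enable_cpumask_locks re-claims them, which is what produced the -32603 "Failed to claim CPU core: 2" response above once the first target had taken core 2 back. A sketch of issuing those calls directly, assuming the stock scripts/rpc.py wrapper in this tree exposes both methods:

#!/usr/bin/env bash
# Sketch: toggling CPU-core lock claiming on running targets over their RPC sockets.
# Assumes two targets are already up as in the test above: -m 0x7 on spdk.sock and
# -m 0x1c on spdk2.sock, both started with --disable-cpumask-locks.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks    # first target claims cores 0-2
"$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails with -32603: core 2 is already claimed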
00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.407 17:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.667 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.667 00:06:18.668 real 0m1.565s 00:06:18.668 user 0m0.727s 00:06:18.668 sys 0m0.127s 00:06:18.668 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.668 17:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.668 ************************************ 00:06:18.668 END TEST locking_overlapped_coremask_via_rpc 00:06:18.668 ************************************ 00:06:18.668 17:06:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.668 17:06:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2788995 ]] 00:06:18.668 17:06:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2788995 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2788995 ']' 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2788995 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788995 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788995' 00:06:18.668 killing process with pid 2788995 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2788995 00:06:18.668 17:06:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2788995 00:06:18.928 17:06:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2789009 ]] 00:06:18.928 17:06:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2789009 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2789009 ']' 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2789009 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2789009 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2789009' 00:06:18.928 killing process with pid 2789009 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2789009 00:06:18.928 17:06:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2789009 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2788995 ]] 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2788995 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2788995 ']' 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2788995 00:06:19.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2788995) - No such process 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2788995 is not found' 00:06:19.189 Process with pid 2788995 is not found 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2789009 ]] 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2789009 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2789009 ']' 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2789009 00:06:19.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2789009) - No such process 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2789009 is not found' 00:06:19.189 Process with pid 2789009 is not found 00:06:19.189 17:06:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.189 00:06:19.189 real 0m13.302s 00:06:19.189 user 0m22.862s 00:06:19.189 sys 0m4.791s 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.189 17:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.189 ************************************ 00:06:19.189 END TEST cpu_locks 00:06:19.189 ************************************ 00:06:19.189 00:06:19.189 real 0m38.543s 00:06:19.189 user 1m15.520s 00:06:19.189 sys 0m8.084s 00:06:19.189 17:06:17 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.189 17:06:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.189 ************************************ 00:06:19.189 END TEST event 00:06:19.189 ************************************ 00:06:19.189 17:06:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.189 17:06:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.189 17:06:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.189 17:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.450 ************************************ 00:06:19.450 START TEST thread 00:06:19.450 ************************************ 00:06:19.450 17:06:17 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.450 * Looking for test storage... 00:06:19.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.450 17:06:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.450 17:06:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.450 17:06:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.450 17:06:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.450 17:06:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.450 17:06:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.450 17:06:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.450 17:06:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.450 17:06:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.450 17:06:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.450 17:06:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.450 17:06:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:19.450 17:06:17 thread -- scripts/common.sh@345 -- # : 1 00:06:19.450 17:06:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.450 17:06:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.450 17:06:17 thread -- scripts/common.sh@365 -- # decimal 1 00:06:19.450 17:06:17 thread -- scripts/common.sh@353 -- # local d=1 00:06:19.450 17:06:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.450 17:06:17 thread -- scripts/common.sh@355 -- # echo 1 00:06:19.450 17:06:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.450 17:06:17 thread -- scripts/common.sh@366 -- # decimal 2 00:06:19.450 17:06:17 thread -- scripts/common.sh@353 -- # local d=2 00:06:19.450 17:06:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.450 17:06:17 thread -- scripts/common.sh@355 -- # echo 2 00:06:19.450 17:06:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.450 17:06:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.450 17:06:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.450 17:06:17 thread -- scripts/common.sh@368 -- # return 0 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.450 --rc genhtml_branch_coverage=1 00:06:19.450 --rc genhtml_function_coverage=1 00:06:19.450 --rc genhtml_legend=1 00:06:19.450 --rc geninfo_all_blocks=1 00:06:19.450 --rc geninfo_unexecuted_blocks=1 00:06:19.450 00:06:19.450 ' 00:06:19.450 17:06:17 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.451 --rc genhtml_branch_coverage=1 00:06:19.451 --rc genhtml_function_coverage=1 00:06:19.451 --rc genhtml_legend=1 00:06:19.451 --rc geninfo_all_blocks=1 00:06:19.451 --rc geninfo_unexecuted_blocks=1 00:06:19.451 
00:06:19.451 ' 00:06:19.451 17:06:17 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.451 --rc genhtml_branch_coverage=1 00:06:19.451 --rc genhtml_function_coverage=1 00:06:19.451 --rc genhtml_legend=1 00:06:19.451 --rc geninfo_all_blocks=1 00:06:19.451 --rc geninfo_unexecuted_blocks=1 00:06:19.451 00:06:19.451 ' 00:06:19.451 17:06:17 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.451 --rc genhtml_branch_coverage=1 00:06:19.451 --rc genhtml_function_coverage=1 00:06:19.451 --rc genhtml_legend=1 00:06:19.451 --rc geninfo_all_blocks=1 00:06:19.451 --rc geninfo_unexecuted_blocks=1 00:06:19.451 00:06:19.451 ' 00:06:19.451 17:06:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.451 17:06:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:19.451 17:06:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.451 17:06:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.451 ************************************ 00:06:19.451 START TEST thread_poller_perf 00:06:19.451 ************************************ 00:06:19.711 17:06:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.711 [2024-10-01 17:06:18.019125] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:19.711 [2024-10-01 17:06:18.019210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789629 ] 00:06:19.711 [2024-10-01 17:06:18.082146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.711 [2024-10-01 17:06:18.113278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.711 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:20.651 ====================================== 00:06:20.651 busy:2409034498 (cyc) 00:06:20.651 total_run_count: 288000 00:06:20.651 tsc_hz: 2400000000 (cyc) 00:06:20.651 ====================================== 00:06:20.651 poller_cost: 8364 (cyc), 3485 (nsec) 00:06:20.651 00:06:20.651 real 0m1.164s 00:06:20.651 user 0m1.088s 00:06:20.651 sys 0m0.072s 00:06:20.651 17:06:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.651 17:06:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.651 ************************************ 00:06:20.651 END TEST thread_poller_perf 00:06:20.651 ************************************ 00:06:20.912 17:06:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.912 17:06:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:20.912 17:06:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.912 17:06:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.912 ************************************ 00:06:20.912 START TEST thread_poller_perf 00:06:20.912 ************************************ 00:06:20.912 17:06:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.912 [2024-10-01 17:06:19.264286] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:20.912 [2024-10-01 17:06:19.264370] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789806 ] 00:06:20.912 [2024-10-01 17:06:19.330780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.912 [2024-10-01 17:06:19.365645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.912 Running 1000 pollers for 1 seconds with 0 microseconds period. 
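[editorial note on the perf summary above] The poller_cost line in that summary is, roughly, the busy cycle count divided by total_run_count, converted to nanoseconds via the reported tsc_hz. A minimal sketch of that arithmetic (illustrative only, not the poller_perf source; the numbers are copied from the summary above):

    # Reproduce the cost figures of the 1 us period run from its raw counters.
    busy=2409034498        # busy (cyc) from the summary
    runs=288000            # total_run_count
    tsc_hz=2400000000      # tsc_hz (cyc)
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
        cyc = b / r                          # ~8365 cycles per poller invocation
        printf "poller_cost: ~%.0f (cyc), ~%.0f (nsec)\n", cyc, cyc * 1e9 / hz
    }'

Applying the same arithmetic to the zero-period run reported below gives roughly 630 cycles, i.e. around 260 ns per call, in line with its summary.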
00:06:22.297 ====================================== 00:06:22.297 busy:2402269008 (cyc) 00:06:22.297 total_run_count: 3804000 00:06:22.297 tsc_hz: 2400000000 (cyc) 00:06:22.297 ====================================== 00:06:22.297 poller_cost: 631 (cyc), 262 (nsec) 00:06:22.297 00:06:22.297 real 0m1.165s 00:06:22.297 user 0m1.088s 00:06:22.297 sys 0m0.073s 00:06:22.297 17:06:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.297 17:06:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.297 ************************************ 00:06:22.297 END TEST thread_poller_perf 00:06:22.297 ************************************ 00:06:22.297 17:06:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.297 00:06:22.297 real 0m2.690s 00:06:22.297 user 0m2.339s 00:06:22.297 sys 0m0.364s 00:06:22.297 17:06:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.297 17:06:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.297 ************************************ 00:06:22.297 END TEST thread 00:06:22.297 ************************************ 00:06:22.297 17:06:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:22.297 17:06:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.297 17:06:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.297 17:06:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.297 17:06:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.297 ************************************ 00:06:22.297 START TEST app_cmdline 00:06:22.297 ************************************ 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.297 * Looking for test storage... 00:06:22.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.297 17:06:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.297 --rc genhtml_branch_coverage=1 00:06:22.297 --rc genhtml_function_coverage=1 00:06:22.297 --rc genhtml_legend=1 00:06:22.297 --rc geninfo_all_blocks=1 00:06:22.297 --rc geninfo_unexecuted_blocks=1 00:06:22.297 00:06:22.297 ' 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.297 --rc genhtml_branch_coverage=1 00:06:22.297 --rc genhtml_function_coverage=1 00:06:22.297 --rc genhtml_legend=1 00:06:22.297 --rc geninfo_all_blocks=1 00:06:22.297 --rc geninfo_unexecuted_blocks=1 00:06:22.297 00:06:22.297 ' 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.297 --rc genhtml_branch_coverage=1 00:06:22.297 --rc genhtml_function_coverage=1 00:06:22.297 --rc genhtml_legend=1 00:06:22.297 --rc geninfo_all_blocks=1 00:06:22.297 --rc geninfo_unexecuted_blocks=1 00:06:22.297 00:06:22.297 ' 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.297 --rc genhtml_branch_coverage=1 00:06:22.297 --rc genhtml_function_coverage=1 00:06:22.297 --rc genhtml_legend=1 00:06:22.297 --rc geninfo_all_blocks=1 00:06:22.297 --rc geninfo_unexecuted_blocks=1 00:06:22.297 00:06:22.297 ' 00:06:22.297 17:06:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.297 17:06:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2790206 00:06:22.297 17:06:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2790206 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2790206 ']' 00:06:22.297 17:06:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.297 17:06:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.297 [2024-10-01 17:06:20.780747] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:22.297 [2024-10-01 17:06:20.780825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790206 ] 00:06:22.558 [2024-10-01 17:06:20.845093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.558 [2024-10-01 17:06:20.884209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.558 17:06:21 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.558 17:06:21 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:22.558 17:06:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.819 { 00:06:22.819 "version": "SPDK v25.01-pre git sha1 e9b861378", 00:06:22.819 "fields": { 00:06:22.819 "major": 25, 00:06:22.819 "minor": 1, 00:06:22.819 "patch": 0, 00:06:22.819 "suffix": "-pre", 00:06:22.819 "commit": "e9b861378" 00:06:22.819 } 00:06:22.819 } 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.819 17:06:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.819 17:06:21 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.080 request: 00:06:23.080 { 00:06:23.080 "method": "env_dpdk_get_mem_stats", 00:06:23.080 "req_id": 1 00:06:23.080 } 00:06:23.080 Got JSON-RPC error response 00:06:23.080 response: 00:06:23.080 { 00:06:23.080 "code": -32601, 00:06:23.080 "message": "Method not found" 00:06:23.080 } 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.080 17:06:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2790206 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2790206 ']' 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2790206 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2790206 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2790206' 00:06:23.080 killing process with pid 2790206 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@969 -- # kill 2790206 00:06:23.080 17:06:21 app_cmdline -- common/autotest_common.sh@974 -- # wait 2790206 00:06:23.341 00:06:23.341 real 0m1.187s 00:06:23.341 user 0m1.383s 00:06:23.341 sys 0m0.432s 00:06:23.341 17:06:21 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.341 17:06:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.341 ************************************ 00:06:23.341 END TEST app_cmdline 00:06:23.341 ************************************ 00:06:23.341 17:06:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.341 17:06:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.341 17:06:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.341 17:06:21 -- common/autotest_common.sh@10 -- # set +x 00:06:23.341 ************************************ 00:06:23.341 START TEST version 00:06:23.341 ************************************ 00:06:23.341 17:06:21 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.341 * Looking for test storage... 
00:06:23.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.341 17:06:21 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.602 17:06:21 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.602 17:06:21 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.602 17:06:21 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.602 17:06:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.602 17:06:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.602 17:06:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.602 17:06:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.602 17:06:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.602 17:06:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.602 17:06:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.602 17:06:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.602 17:06:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.602 17:06:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.602 17:06:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.602 17:06:21 version -- scripts/common.sh@344 -- # case "$op" in 00:06:23.602 17:06:21 version -- scripts/common.sh@345 -- # : 1 00:06:23.602 17:06:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.602 17:06:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.602 17:06:21 version -- scripts/common.sh@365 -- # decimal 1 00:06:23.602 17:06:21 version -- scripts/common.sh@353 -- # local d=1 00:06:23.602 17:06:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.602 17:06:21 version -- scripts/common.sh@355 -- # echo 1 00:06:23.602 17:06:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.602 17:06:21 version -- scripts/common.sh@366 -- # decimal 2 00:06:23.602 17:06:21 version -- scripts/common.sh@353 -- # local d=2 00:06:23.602 17:06:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.603 17:06:21 version -- scripts/common.sh@355 -- # echo 2 00:06:23.603 17:06:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.603 17:06:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.603 17:06:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.603 17:06:21 version -- scripts/common.sh@368 -- # return 0 00:06:23.603 17:06:21 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.603 17:06:21 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.603 --rc genhtml_branch_coverage=1 00:06:23.603 --rc genhtml_function_coverage=1 00:06:23.603 --rc genhtml_legend=1 00:06:23.603 --rc geninfo_all_blocks=1 00:06:23.603 --rc geninfo_unexecuted_blocks=1 00:06:23.603 00:06:23.603 ' 00:06:23.603 17:06:21 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.603 --rc genhtml_branch_coverage=1 00:06:23.603 --rc genhtml_function_coverage=1 00:06:23.603 --rc genhtml_legend=1 00:06:23.603 --rc geninfo_all_blocks=1 00:06:23.603 --rc geninfo_unexecuted_blocks=1 00:06:23.603 00:06:23.603 ' 00:06:23.603 17:06:21 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.603 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.603 --rc genhtml_branch_coverage=1 00:06:23.603 --rc genhtml_function_coverage=1 00:06:23.603 --rc genhtml_legend=1 00:06:23.603 --rc geninfo_all_blocks=1 00:06:23.603 --rc geninfo_unexecuted_blocks=1 00:06:23.603 00:06:23.603 ' 00:06:23.603 17:06:21 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.603 --rc genhtml_branch_coverage=1 00:06:23.603 --rc genhtml_function_coverage=1 00:06:23.603 --rc genhtml_legend=1 00:06:23.603 --rc geninfo_all_blocks=1 00:06:23.603 --rc geninfo_unexecuted_blocks=1 00:06:23.603 00:06:23.603 ' 00:06:23.603 17:06:21 version -- app/version.sh@17 -- # get_header_version major 00:06:23.603 17:06:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.603 17:06:21 version -- app/version.sh@14 -- # cut -f2 00:06:23.603 17:06:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.603 17:06:21 version -- app/version.sh@17 -- # major=25 00:06:23.603 17:06:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.603 17:06:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.603 17:06:21 version -- app/version.sh@14 -- # cut -f2 00:06:23.603 17:06:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.603 17:06:22 version -- app/version.sh@18 -- # minor=1 00:06:23.603 17:06:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.603 17:06:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.603 17:06:22 version -- app/version.sh@14 -- # cut -f2 00:06:23.603 17:06:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.603 17:06:22 version -- app/version.sh@19 -- # patch=0 00:06:23.603 17:06:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.603 17:06:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.603 17:06:22 version -- app/version.sh@14 -- # cut -f2 00:06:23.603 17:06:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.603 17:06:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.603 17:06:22 version -- app/version.sh@22 -- # version=25.1 00:06:23.603 17:06:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.603 17:06:22 version -- app/version.sh@28 -- # version=25.1rc0 00:06:23.603 17:06:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.603 17:06:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.603 17:06:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:23.603 17:06:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:23.603 00:06:23.603 real 0m0.278s 00:06:23.603 user 0m0.171s 00:06:23.603 sys 0m0.156s 00:06:23.603 17:06:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.603 
17:06:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.603 ************************************ 00:06:23.603 END TEST version 00:06:23.603 ************************************ 00:06:23.603 17:06:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:23.603 17:06:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:23.603 17:06:22 -- spdk/autotest.sh@194 -- # uname -s 00:06:23.603 17:06:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:23.603 17:06:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.603 17:06:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.603 17:06:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:23.603 17:06:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:23.603 17:06:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:23.603 17:06:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.603 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:23.865 17:06:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:23.865 17:06:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:23.865 17:06:22 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:23.865 17:06:22 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:23.865 17:06:22 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:23.865 17:06:22 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:23.865 17:06:22 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.865 17:06:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.865 17:06:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.865 17:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:23.865 ************************************ 00:06:23.865 START TEST nvmf_tcp 00:06:23.865 ************************************ 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.865 * Looking for test storage... 
00:06:23.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.865 17:06:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.865 --rc genhtml_branch_coverage=1 00:06:23.865 --rc genhtml_function_coverage=1 00:06:23.865 --rc genhtml_legend=1 00:06:23.865 --rc geninfo_all_blocks=1 00:06:23.865 --rc geninfo_unexecuted_blocks=1 00:06:23.865 00:06:23.865 ' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.865 --rc genhtml_branch_coverage=1 00:06:23.865 --rc genhtml_function_coverage=1 00:06:23.865 --rc genhtml_legend=1 00:06:23.865 --rc geninfo_all_blocks=1 00:06:23.865 --rc geninfo_unexecuted_blocks=1 00:06:23.865 00:06:23.865 ' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.865 --rc genhtml_branch_coverage=1 00:06:23.865 --rc genhtml_function_coverage=1 00:06:23.865 --rc genhtml_legend=1 00:06:23.865 --rc geninfo_all_blocks=1 00:06:23.865 --rc geninfo_unexecuted_blocks=1 00:06:23.865 00:06:23.865 ' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.865 --rc genhtml_branch_coverage=1 00:06:23.865 --rc genhtml_function_coverage=1 00:06:23.865 --rc genhtml_legend=1 00:06:23.865 --rc geninfo_all_blocks=1 00:06:23.865 --rc geninfo_unexecuted_blocks=1 00:06:23.865 00:06:23.865 ' 00:06:23.865 17:06:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.865 17:06:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.865 17:06:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.865 17:06:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.128 ************************************ 00:06:24.128 START TEST nvmf_target_core 00:06:24.128 ************************************ 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.128 * Looking for test storage... 00:06:24.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.128 --rc genhtml_branch_coverage=1 00:06:24.128 --rc genhtml_function_coverage=1 00:06:24.128 --rc genhtml_legend=1 00:06:24.128 --rc geninfo_all_blocks=1 00:06:24.128 --rc geninfo_unexecuted_blocks=1 00:06:24.128 00:06:24.128 ' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.128 --rc genhtml_branch_coverage=1 00:06:24.128 --rc genhtml_function_coverage=1 00:06:24.128 --rc genhtml_legend=1 00:06:24.128 --rc geninfo_all_blocks=1 00:06:24.128 --rc geninfo_unexecuted_blocks=1 00:06:24.128 00:06:24.128 ' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.128 --rc genhtml_branch_coverage=1 00:06:24.128 --rc genhtml_function_coverage=1 00:06:24.128 --rc genhtml_legend=1 00:06:24.128 --rc geninfo_all_blocks=1 00:06:24.128 --rc geninfo_unexecuted_blocks=1 00:06:24.128 00:06:24.128 ' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.128 --rc genhtml_branch_coverage=1 00:06:24.128 --rc genhtml_function_coverage=1 00:06:24.128 --rc genhtml_legend=1 00:06:24.128 --rc geninfo_all_blocks=1 00:06:24.128 --rc geninfo_unexecuted_blocks=1 00:06:24.128 00:06:24.128 ' 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:24.128 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.129 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.391 
************************************ 00:06:24.391 START TEST nvmf_abort 00:06:24.391 ************************************ 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.391 * Looking for test storage... 00:06:24.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:24.391 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.392 --rc genhtml_branch_coverage=1 00:06:24.392 --rc genhtml_function_coverage=1 00:06:24.392 --rc genhtml_legend=1 00:06:24.392 --rc geninfo_all_blocks=1 00:06:24.392 --rc geninfo_unexecuted_blocks=1 00:06:24.392 00:06:24.392 ' 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.392 --rc genhtml_branch_coverage=1 00:06:24.392 --rc genhtml_function_coverage=1 00:06:24.392 --rc genhtml_legend=1 00:06:24.392 --rc geninfo_all_blocks=1 00:06:24.392 --rc geninfo_unexecuted_blocks=1 00:06:24.392 00:06:24.392 ' 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.392 --rc genhtml_branch_coverage=1 00:06:24.392 --rc genhtml_function_coverage=1 00:06:24.392 --rc genhtml_legend=1 00:06:24.392 --rc geninfo_all_blocks=1 00:06:24.392 --rc geninfo_unexecuted_blocks=1 00:06:24.392 00:06:24.392 ' 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.392 --rc genhtml_branch_coverage=1 00:06:24.392 --rc genhtml_function_coverage=1 00:06:24.392 --rc genhtml_legend=1 00:06:24.392 --rc geninfo_all_blocks=1 00:06:24.392 --rc geninfo_unexecuted_blocks=1 00:06:24.392 00:06:24.392 ' 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.392 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.654 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
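Note on the "[: : integer expression expected" message logged above: nvmf/common.sh line 33 executes '[' '' -eq 1 ']', i.e. it numerically compares a variable that is empty in this run, which test(1) rejects; the script simply falls through to the false branch and continues. A minimal sketch of the safer guard pattern (the variable name here is hypothetical, not the one used in common.sh):

  # Supply a numeric default so the -eq test always sees an integer.
  some_flag=${SOME_FLAG:-0}          # hypothetical variable, defaulted to 0 when unset/empty
  if [ "$some_flag" -eq 1 ]; then    # valid integer comparison even when SOME_FLAG was never exported
      echo "flag enabled"
  fi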
00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.655 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.801 17:06:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:32.801 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:32.801 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.801 17:06:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:32.801 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:32.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.801 17:06:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.801 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.801 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.801 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.801 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:06:32.802 00:06:32.802 --- 10.0.0.2 ping statistics --- 00:06:32.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.802 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:06:32.802 00:06:32.802 --- 10.0.0.1 ping statistics --- 00:06:32.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.802 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2794620 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2794620 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2794620 ']' 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.802 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 [2024-10-01 17:06:30.351690] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:32.802 [2024-10-01 17:06:30.351746] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.802 [2024-10-01 17:06:30.441219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.802 [2024-10-01 17:06:30.489152] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.802 [2024-10-01 17:06:30.489211] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.802 [2024-10-01 17:06:30.489220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.802 [2024-10-01 17:06:30.489227] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.802 [2024-10-01 17:06:30.489233] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.802 [2024-10-01 17:06:30.489365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.802 [2024-10-01 17:06:30.489528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.802 [2024-10-01 17:06:30.489530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 [2024-10-01 17:06:31.204755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 Malloc0 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 Delay0 
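For readability, the network topology that nvmftestinit built in the trace above, condensed into a standalone sketch. The interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addressing are taken from this log; on a machine with different NICs the names would differ:

  # Target-side port goes into its own network namespace; initiator-side port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side, tagged so teardown can strip it again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Connectivity check in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This keeps the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) on the same host but in separate namespaces, so the two physical e810 ports carry real NVMe/TCP traffic between them.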
00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 [2024-10-01 17:06:31.285705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.802 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:33.064 [2024-10-01 17:06:31.365358] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:34.980 Initializing NVMe Controllers 00:06:34.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:34.980 controller IO queue size 128 less than required 00:06:34.980 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:34.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:34.980 Initialization complete. Launching workers. 
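The abort test configuration traced above, condensed into the equivalent command sequence. rpc_cmd in the trace resolves to scripts/rpc.py talking to the target's default /var/tmp/spdk.sock; absolute paths are shortened here, all arguments are as logged:

  # Transport and backing devices: a 64 MiB / 4096-byte-block malloc bdev wrapped in a delay bdev
  # whose artificially high read/write latencies keep I/O in flight long enough to be aborted.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Subsystem, namespace and listeners on the target address.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The abort example then hammers the queue-depth-128 connection for 1 second.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The abort statistics that follow in the log (aborts submitted vs. successful) are the pass criterion for this test.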
00:06:34.980 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29019 00:06:34.980 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29080, failed to submit 62 00:06:34.980 success 29023, unsuccessful 57, failed 0 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:34.980 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:34.980 rmmod nvme_tcp 00:06:34.980 rmmod nvme_fabrics 00:06:35.242 rmmod nvme_keyring 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2794620 ']' 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2794620 ']' 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794620' 00:06:35.242 killing process with pid 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2794620 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:35.242 17:06:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:35.242 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.243 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.791 00:06:37.791 real 0m13.126s 00:06:37.791 user 0m13.747s 00:06:37.791 sys 0m6.372s 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.791 ************************************ 00:06:37.791 END TEST nvmf_abort 00:06:37.791 ************************************ 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.791 ************************************ 00:06:37.791 START TEST nvmf_ns_hotplug_stress 00:06:37.791 ************************************ 00:06:37.791 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:37.791 * Looking for test storage... 
00:06:37.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.791 --rc genhtml_branch_coverage=1 00:06:37.791 --rc genhtml_function_coverage=1 00:06:37.791 --rc genhtml_legend=1 00:06:37.791 --rc geninfo_all_blocks=1 00:06:37.791 --rc geninfo_unexecuted_blocks=1 00:06:37.791 00:06:37.791 ' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.791 --rc genhtml_branch_coverage=1 00:06:37.791 --rc genhtml_function_coverage=1 00:06:37.791 --rc genhtml_legend=1 00:06:37.791 --rc geninfo_all_blocks=1 00:06:37.791 --rc geninfo_unexecuted_blocks=1 00:06:37.791 00:06:37.791 ' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.791 --rc genhtml_branch_coverage=1 00:06:37.791 --rc genhtml_function_coverage=1 00:06:37.791 --rc genhtml_legend=1 00:06:37.791 --rc geninfo_all_blocks=1 00:06:37.791 --rc geninfo_unexecuted_blocks=1 00:06:37.791 00:06:37.791 ' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.791 --rc genhtml_branch_coverage=1 00:06:37.791 --rc genhtml_function_coverage=1 00:06:37.791 --rc genhtml_legend=1 00:06:37.791 --rc geninfo_all_blocks=1 00:06:37.791 --rc geninfo_unexecuted_blocks=1 00:06:37.791 00:06:37.791 ' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.791 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.792 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:45.931 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.931 
17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:45.931 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:45.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:45.931 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.931 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:06:45.932 00:06:45.932 --- 10.0.0.2 ping statistics --- 00:06:45.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.932 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:06:45.932 00:06:45.932 --- 10.0.0.1 ping statistics --- 00:06:45.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.932 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2799409 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2799409 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
2799409 ']' 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.932 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.932 [2024-10-01 17:06:43.617015] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:45.932 [2024-10-01 17:06:43.617086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.932 [2024-10-01 17:06:43.705342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.932 [2024-10-01 17:06:43.753499] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.932 [2024-10-01 17:06:43.753554] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.932 [2024-10-01 17:06:43.753563] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.932 [2024-10-01 17:06:43.753571] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.932 [2024-10-01 17:06:43.753577] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
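For reference, the interface and namespace bring-up traced above reduces to the following sequence (a condensed sketch using the interface names, addresses, paths and core mask recorded in this run; it is not a verbatim excerpt of nvmf/common.sh, and the iptables comment tag from the log is omitted):

  # Move one port of the 0000:4b:00.x pair into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic on port 4420 and sanity-check connectivity in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace on cores 1-3 (-m 0xE) with tracepoint group mask 0xFFFF
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
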
00:06:45.932 [2024-10-01 17:06:43.753715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.932 [2024-10-01 17:06:43.753860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.932 [2024-10-01 17:06:43.753862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:45.932 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.192 [2024-10-01 17:06:44.623035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.192 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.452 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.452 [2024-10-01 17:06:44.993560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.712 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:46.973 Malloc0 00:06:46.973 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:47.233 Delay0 00:06:47.233 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.233 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:47.494 NULL1 00:06:47.494 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:47.755 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2800099 00:06:47.756 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:47.756 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:47.756 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.017 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.017 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:48.017 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:48.277 true 00:06:48.277 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:48.277 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.537 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.538 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:48.538 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:48.799 true 00:06:48.799 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:48.799 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.061 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.061 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:49.061 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:49.322 true 00:06:49.322 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:49.322 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.582 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.582 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:49.582 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:49.842 true 00:06:49.842 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:49.842 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.102 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.362 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:50.362 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:50.362 true 00:06:50.362 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:50.362 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.621 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.881 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:50.881 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:50.881 true 00:06:50.881 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:50.881 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.141 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.401 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:51.401 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:51.401 true 00:06:51.401 17:06:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:51.401 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.660 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.920 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:51.920 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:51.920 true 00:06:51.920 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:51.920 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.179 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.440 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:52.440 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:52.440 true 00:06:52.699 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:52.699 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.699 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.959 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:52.959 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:53.238 true 00:06:53.238 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:53.238 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.238 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.542 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:53.542 17:06:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:53.542 true 00:06:53.836 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:53.836 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.836 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.101 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:54.101 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:54.101 true 00:06:54.361 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:54.361 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.361 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.620 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:54.620 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:54.881 true 00:06:54.881 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:54.881 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.881 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.141 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:55.141 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:55.401 true 00:06:55.401 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:55.401 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.662 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.662 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:55.662 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:55.923 true 00:06:55.923 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:55.923 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.183 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.183 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:56.183 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:56.443 true 00:06:56.443 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:56.443 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.704 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.704 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:56.704 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:56.965 true 00:06:56.965 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:56.965 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.226 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.486 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:57.486 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:57.486 true 00:06:57.486 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:57.486 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.747 17:06:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.006 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:58.006 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:58.006 true 00:06:58.007 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:58.007 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.266 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.526 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:58.526 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:58.526 true 00:06:58.786 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:58.786 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.786 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.046 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:59.046 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:59.307 true 00:06:59.307 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:59.307 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.307 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.567 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:59.567 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:59.827 true 00:06:59.827 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:06:59.827 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.827 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.087 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.087 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.347 true 00:07:00.347 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:00.347 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.608 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.608 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:00.608 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:00.869 true 00:07:00.869 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:00.869 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.129 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.129 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:01.129 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:01.390 true 00:07:01.390 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:01.390 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.651 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.911 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:01.911 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:01.911 true 00:07:01.911 17:07:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:01.911 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.171 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.432 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:02.432 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:02.432 true 00:07:02.432 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:02.432 17:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.693 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.954 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:02.954 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:02.954 true 00:07:02.954 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:02.954 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.213 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.473 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:03.473 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:03.473 true 00:07:03.734 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:03.734 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.734 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.993 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:03.993 17:07:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:04.253 true 00:07:04.253 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:04.253 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.253 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.513 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:04.513 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:04.773 true 00:07:04.773 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:04.773 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.032 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.032 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:05.032 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:05.292 true 00:07:05.292 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:05.292 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.552 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.552 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:05.552 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:05.811 true 00:07:05.811 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:05.811 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.071 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.071 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:06.071 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:06.330 true 00:07:06.330 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:06.330 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.590 17:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.850 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:06.850 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:06.850 true 00:07:06.850 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:06.850 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.109 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.369 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:07.369 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:07.369 true 00:07:07.369 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:07.369 17:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.629 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.889 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:07.889 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:07.889 true 00:07:08.149 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:08.149 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.149 17:07:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.408 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:08.408 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:08.668 true 00:07:08.668 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:08.668 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.668 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.927 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:08.928 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:09.188 true 00:07:09.188 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:09.188 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.449 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.449 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:09.449 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:09.709 true 00:07:09.709 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:09.709 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.968 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.968 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:09.968 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:10.227 true 00:07:10.227 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:10.227 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.486 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.746 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:10.746 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:10.746 true 00:07:10.746 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:10.746 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.005 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.265 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:11.265 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:11.265 true 00:07:11.265 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:11.265 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.525 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.785 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:11.785 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:11.785 true 00:07:11.785 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:11.785 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.044 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.307 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:12.307 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:12.307 true 00:07:12.567 17:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:12.567 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.567 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.827 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:12.827 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:13.086 true 00:07:13.086 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:13.086 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.086 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.346 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:13.346 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:13.606 true 00:07:13.606 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:13.606 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.606 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.865 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:13.865 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:14.123 true 00:07:14.123 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:14.123 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.381 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.381 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:14.381 17:07:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:14.640 true 00:07:14.640 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:14.640 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.899 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.899 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:14.899 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:15.158 true 00:07:15.158 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:15.158 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.417 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.675 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:15.675 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:15.675 true 00:07:15.675 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:15.675 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.934 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.194 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:16.194 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:16.194 true 00:07:16.194 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:16.194 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.454 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.714 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:16.714 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:16.714 true 00:07:16.974 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:16.974 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.974 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.233 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:17.233 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:17.493 true 00:07:17.493 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:17.493 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.493 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.754 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:17.754 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:18.014 true 00:07:18.014 Initializing NVMe Controllers 00:07:18.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.014 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:07:18.014 Controller IO queue size 128, less than required. 00:07:18.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.014 WARNING: Some requested NVMe devices were skipped 00:07:18.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:18.014 Initialization complete. Launching workers. 
00:07:18.014 ======================================================== 00:07:18.014 Latency(us) 00:07:18.014 Device Information : IOPS MiB/s Average min max 00:07:18.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30716.53 15.00 4166.89 1441.59 8309.95 00:07:18.014 ======================================================== 00:07:18.014 Total : 30716.53 15.00 4166.89 1441.59 8309.95 00:07:18.014 00:07:18.014 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2800099 00:07:18.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2800099) - No such process 00:07:18.014 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2800099 00:07:18.014 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.014 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.274 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:18.274 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:18.274 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:18.274 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.274 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:18.534 null0 00:07:18.534 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.534 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.534 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:18.534 null1 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:18.794 null2 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.794 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:19.055 null3 00:07:19.055 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.055 
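The @44-@50 entries above are one hotplug iteration repeated with null_size 1046 through 1055: while the background I/O job (pid 2800099) is still alive, namespace 1 is hot-removed, the Delay0 bdev is re-attached, and NULL1 is resized to the next null_size. The loop ends when kill -0 reports "No such process", and the perf summary above records about 30716 IOPS at roughly 4.17 ms average latency on NSID 2. A minimal bash sketch of that loop, reconstructed from the traced commands only (the while wrapper, variable names, and starting value are assumptions, not the script's literal source):

  #!/usr/bin/env bash
  # Sketch of the traced @44-@50 loop; rpc.py path, NQN, and bdev names are taken
  # from the trace, the loop structure and variable names are assumptions.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  perf_pid=2800099      # pid of the background I/O job checked by the trace
  null_size=1045        # traced values run 1046..1055 in this stretch

  while kill -0 "$perf_pid" 2>/dev/null; do        # @44: loop while the I/O job lives
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: re-attach the Delay0 bdev
      (( ++null_size ))                            # @49: next size
      "$rpc" bdev_null_resize NULL1 "$null_size"   # @50: resize NULL1 under load
  done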
17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.055 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:19.316 null4 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:19.316 null5 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.316 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:19.578 null6 00:07:19.578 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.578 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.578 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:19.839 null7 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
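From @58 onward the script moves to the multi-worker phase: it creates eight null bdevs (null0 through null7, traced with arguments 100 and 4096), starts one add_remove worker per bdev against namespace IDs 1 through 8, collects the worker pids, and waits for all of them (@66 lists pids 2806584 through 2806605). A sketch of that fan-out under the same assumptions as above; the '&' backgrounding is implied by the pids+=($!) entries rather than shown verbatim, and add_remove itself is sketched further below:

  # Sketch of the traced @58-@66 fan-out: eight null bdevs, eight background
  # add_remove workers (one namespace ID per bdev), then a single wait on all.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()

  for (( i = 0; i < nthreads; i++ )); do           # @59-@60
      "$rpc" bdev_null_create "null$i" 100 4096    # traced args: 100 and 4096
  done

  for (( i = 0; i < nthreads; i++ )); do           # @62-@64
      add_remove "$(( i + 1 ))" "null$i" &         # nsid 1..8 paired with null0..null7
      pids+=($!)
  done

  wait "${pids[@]}"                                # @66: pids 2806584 ... 2806605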
00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2806584 2806586 2806589 2806593 2806596 2806599 2806602 2806605 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.839 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.099 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.100 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.360 17:07:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.360 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.621 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.881 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.882 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.142 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.403 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.662 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.663 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
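The remaining entries in this stretch are the eight workers interleaving their @16-@18 iterations: each cycle is one nvmf_subsystem_add_ns immediately followed by the matching nvmf_subsystem_remove_ns on the same namespace ID. A sketch of that helper as the trace reflects it (the function wrapper and positional-argument handling are assumptions; the RPC calls are as traced):

  # The add_remove worker as reflected by the @14-@18 trace lines: ten cycles of
  # attach-then-detach for one namespace ID / null bdev pair.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2                                        # @14
      for (( i = 0; i < 10; i++ )); do                             # @16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
      done
  }

  add_remove 2 null1   # e.g. the worker traced as "add_remove 2 null1"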
00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.922 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.179 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.180 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.438 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.439 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.439 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.439 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.439 17:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.697 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.013 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.272 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.273 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:23.533 17:07:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:23.533 17:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.533 rmmod nvme_tcp 00:07:23.533 rmmod nvme_fabrics 00:07:23.533 rmmod nvme_keyring 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2799409 ']' 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2799409 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2799409 ']' 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2799409 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:23.533 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2799409 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2799409' 00:07:23.792 killing process with pid 2799409 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2799409 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2799409 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.792 17:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.334 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.334 00:07:26.334 real 0m48.416s 00:07:26.334 user 3m18.480s 00:07:26.334 sys 0m16.903s 00:07:26.334 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.334 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.334 ************************************ 00:07:26.334 END TEST nvmf_ns_hotplug_stress 00:07:26.334 ************************************ 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.335 ************************************ 00:07:26.335 START TEST nvmf_delete_subsystem 00:07:26.335 ************************************ 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:26.335 * Looking for test storage... 
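The real/user/sys timing and the START TEST / END TEST banners above are printed by the run_test helper from autotest_common.sh, which the log invokes as run_test nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp. The sketch below shows the shape such a wrapper would have, inferred only from the banners, the argument-count check ('[' 3 -le 1 ']') and the time output in the log; it is not the verbatim SPDK helper.

run_test() {
    # usage: run_test <name> <script> [args...]; inferred wrapper, not SPDK's own
    local name=$1
    [ "$#" -le 1 ] && { echo "run_test: need a test name and a command" >&2; return 1; }
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                # run the test script, printing real/user/sys at the end
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}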
00:07:26.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.335 --rc genhtml_branch_coverage=1 00:07:26.335 --rc genhtml_function_coverage=1 00:07:26.335 --rc genhtml_legend=1 00:07:26.335 --rc geninfo_all_blocks=1 00:07:26.335 --rc geninfo_unexecuted_blocks=1 00:07:26.335 00:07:26.335 ' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.335 --rc genhtml_branch_coverage=1 00:07:26.335 --rc genhtml_function_coverage=1 00:07:26.335 --rc genhtml_legend=1 00:07:26.335 --rc geninfo_all_blocks=1 00:07:26.335 --rc geninfo_unexecuted_blocks=1 00:07:26.335 00:07:26.335 ' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.335 --rc genhtml_branch_coverage=1 00:07:26.335 --rc genhtml_function_coverage=1 00:07:26.335 --rc genhtml_legend=1 00:07:26.335 --rc geninfo_all_blocks=1 00:07:26.335 --rc geninfo_unexecuted_blocks=1 00:07:26.335 00:07:26.335 ' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.335 --rc genhtml_branch_coverage=1 00:07:26.335 --rc genhtml_function_coverage=1 00:07:26.335 --rc genhtml_legend=1 00:07:26.335 --rc geninfo_all_blocks=1 00:07:26.335 --rc geninfo_unexecuted_blocks=1 00:07:26.335 00:07:26.335 ' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.335 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.336 17:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.476 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:34.477 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.477 
17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:34.477 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:34.477 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:34.477 Found net devices under 0000:4b:00.1: cvl_0_1 
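The "Found 0000:4b:00.x (0x8086 - 0x159b)" and "Found net devices under ..." lines come from gather_supported_nvmf_pci_devs in nvmf/common.sh, which filters PCI functions against known e810/x722/mlx device IDs and then maps each one to its kernel net device through sysfs. A reduced sketch of that mapping step follows; the variable names mirror the trace, the operstate read stands in for whatever produced the [[ up == up ]] test in the log, and the ID filtering that fills pci_devs is omitted.

net_devs=()
for pci in "${pci_devs[@]}"; do
    # every net interface registered under this PCI function
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    for net_dev in "${!pci_net_devs[@]}"; do
        # keep only interfaces whose operational state is up (assumed check)
        [[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] || unset 'pci_net_devs[net_dev]'
    done
    (( ${#pci_net_devs[@]} > 0 )) || continue
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=( "${pci_net_devs[@]}" )
done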
00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.477 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:07:34.477 00:07:34.477 --- 10.0.0.2 ping statistics --- 00:07:34.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.477 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:07:34.477 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:07:34.477 00:07:34.477 --- 10.0.0.1 ping statistics --- 00:07:34.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.477 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:34.477 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.477 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:34.477 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:34.477 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2811826 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2811826 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2811826 ']' 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.478 17:07:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 [2024-10-01 17:07:32.127892] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:34.478 [2024-10-01 17:07:32.127943] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.478 [2024-10-01 17:07:32.194257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.478 [2024-10-01 17:07:32.224430] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.478 [2024-10-01 17:07:32.224469] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.478 [2024-10-01 17:07:32.224477] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.478 [2024-10-01 17:07:32.224483] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.478 [2024-10-01 17:07:32.224489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.478 [2024-10-01 17:07:32.224631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.478 [2024-10-01 17:07:32.224632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 [2024-10-01 17:07:32.340669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.478 17:07:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 [2024-10-01 17:07:32.356884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 NULL1 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 Delay0 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2811856 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:34.478 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:34.478 [2024-10-01 17:07:32.441646] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
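Pulled together from the rpc_cmd traces above, the target-side setup for this test is a short sequence of RPCs. The sketch below restates them as plain rpc.py calls; the path and all arguments are taken from the log, and rpc_cmd is assumed to be a thin wrapper that forwards to rpc.py.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192                        # TCP transport init
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # subsystem, up to 10 namespaces
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # listen on the target-namespace IP
"$rpc" bdev_null_create NULL1 1000 512                                # null bdev: 1000 MB, 512-byte blocks
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # wrap it in a high-latency delay bdev
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0                            # expose the delay bdev as a namespace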
00:07:35.888 17:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.888 17:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.888 17:07:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 starting I/O failed: -6 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 [2024-10-01 17:07:34.575680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18faed0 is same with the state(6) to be set 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 
00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Write completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.178 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, 
sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 
00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 Read completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 Write completed with error (sct=0, sc=8) 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:36.179 starting I/O failed: -6 00:07:37.163 [2024-10-01 17:07:35.541306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8b20 is same with the state(6) to be set 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 [2024-10-01 17:07:35.579062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb0b0 is same with the state(6) to be set 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write 
completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 [2024-10-01 17:07:35.579424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9c50 is same with the state(6) to be set 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 [2024-10-01 17:07:35.582840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd20400d780 is same with the state(6) to be set 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed 
with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 Write completed with error (sct=0, sc=8) 00:07:37.163 Read completed with error (sct=0, sc=8) 00:07:37.163 [2024-10-01 17:07:35.583075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd20400cfe0 is same with the state(6) to be set 00:07:37.163 Initializing NVMe Controllers 00:07:37.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:37.163 Controller IO queue size 128, less than required. 00:07:37.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:37.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:37.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:37.163 Initialization complete. Launching workers. 
00:07:37.163 ======================================================== 00:07:37.163 Latency(us) 00:07:37.163 Device Information : IOPS MiB/s Average min max 00:07:37.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.38 0.08 908623.13 216.67 1006609.30 00:07:37.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.35 0.08 995582.84 351.19 2002172.65 00:07:37.163 ======================================================== 00:07:37.163 Total : 333.73 0.16 953011.52 216.67 2002172.65 00:07:37.163 00:07:37.163 [2024-10-01 17:07:35.583598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f8b20 (9): Bad file descriptor 00:07:37.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:37.163 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.163 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:37.163 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2811856 00:07:37.163 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2811856 00:07:37.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2811856) - No such process 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2811856 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2811856 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2811856 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.734 17:07:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 [2024-10-01 17:07:36.116689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2812547 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:37.734 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.734 [2024-10-01 17:07:36.183740] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
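The second pass recreates the subsystem, listener and namespace, starts a shorter 3-second perf run (perf_pid=2812547), and then waits for it by probing the pid with kill -0 every 0.5 s under the (( delay++ > 20 )) guard shown at delete_subsystem.sh@57/@58/@60 below. A minimal sketch of that bounded-wait idiom, assuming an illustrative helper name and return convention that are not taken from the script:

  # Sketch only: poll a background pid until it exits, giving up after roughly 10 s.
  wait_for_pid() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2>/dev/null; do   # signal 0 only checks existence, nothing is delivered
          (( delay++ > 20 )) && return 1     # bounded: about 21 polls at 0.5 s each
          sleep 0.5
      done
      wait "$pid" 2>/dev/null                # reap the child and return its exit status
  }

  wait_for_pid "$perf_pid" || echo "perf did not exit in time"

Because kill -0 delivers no signal, the loop simply ends once perf finishes on its own, so the later "kill: (2812547) - No such process" message from the same probe is the expected terminating condition rather than a failure.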
00:07:38.303 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.303 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:38.303 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.874 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.874 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:38.874 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.134 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.134 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:39.134 17:07:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.703 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.703 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:39.703 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.274 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.274 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:40.274 17:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.845 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.845 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:40.845 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.845 Initializing NVMe Controllers 00:07:40.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.845 Controller IO queue size 128, less than required. 00:07:40.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:40.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:40.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:40.845 Initialization complete. Launching workers. 
00:07:40.845 ======================================================== 00:07:40.845 Latency(us) 00:07:40.845 Device Information : IOPS MiB/s Average min max 00:07:40.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002148.97 1000208.89 1042488.55 00:07:40.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003182.26 1000217.64 1041333.63 00:07:40.845 ======================================================== 00:07:40.845 Total : 256.00 0.12 1002665.61 1000208.89 1042488.55 00:07:40.845 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812547 00:07:41.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2812547) - No such process 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2812547 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.416 rmmod nvme_tcp 00:07:41.416 rmmod nvme_fabrics 00:07:41.416 rmmod nvme_keyring 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2811826 ']' 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2811826 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2811826 ']' 00:07:41.416 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2811826 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2811826 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2811826' 00:07:41.417 killing process with pid 2811826 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2811826 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2811826 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.417 17:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.964 00:07:43.964 real 0m17.593s 00:07:43.964 user 0m29.304s 00:07:43.964 sys 0m6.694s 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.964 ************************************ 00:07:43.964 END TEST nvmf_delete_subsystem 00:07:43.964 ************************************ 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.964 ************************************ 00:07:43.964 START TEST nvmf_host_management 00:07:43.964 ************************************ 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.964 * Looking for test storage... 
00:07:43.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.964 --rc genhtml_branch_coverage=1 00:07:43.964 --rc genhtml_function_coverage=1 00:07:43.964 --rc genhtml_legend=1 00:07:43.964 --rc geninfo_all_blocks=1 00:07:43.964 --rc geninfo_unexecuted_blocks=1 00:07:43.964 00:07:43.964 ' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:43.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.964 --rc genhtml_branch_coverage=1 00:07:43.964 --rc genhtml_function_coverage=1 00:07:43.964 --rc genhtml_legend=1 00:07:43.964 --rc geninfo_all_blocks=1 00:07:43.964 --rc geninfo_unexecuted_blocks=1 00:07:43.964 00:07:43.964 ' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.964 --rc genhtml_branch_coverage=1 00:07:43.964 --rc genhtml_function_coverage=1 00:07:43.964 --rc genhtml_legend=1 00:07:43.964 --rc geninfo_all_blocks=1 00:07:43.964 --rc geninfo_unexecuted_blocks=1 00:07:43.964 00:07:43.964 ' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.964 --rc genhtml_branch_coverage=1 00:07:43.964 --rc genhtml_function_coverage=1 00:07:43.964 --rc genhtml_legend=1 00:07:43.964 --rc geninfo_all_blocks=1 00:07:43.964 --rc geninfo_unexecuted_blocks=1 00:07:43.964 00:07:43.964 ' 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:43.964 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:43.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.965 17:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:50.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:50.571 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.571 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:50.571 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.572 17:07:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:50.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.572 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.572 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.572 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.572 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.572 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:07:50.833 00:07:50.833 --- 10.0.0.2 ping statistics --- 00:07:50.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.833 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:07:50.833 00:07:50.833 --- 10.0.0.1 ping statistics --- 00:07:50.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.833 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2817503 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2817503 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.833 17:07:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2817503 ']' 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.833 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.833 [2024-10-01 17:07:49.303270] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:50.833 [2024-10-01 17:07:49.303335] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.093 [2024-10-01 17:07:49.395905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.094 [2024-10-01 17:07:49.446663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.094 [2024-10-01 17:07:49.446725] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.094 [2024-10-01 17:07:49.446738] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.094 [2024-10-01 17:07:49.446745] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.094 [2024-10-01 17:07:49.446751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
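The target side of this test runs entirely inside the cvl_0_0_ns_spdk network namespace set up in the trace above: one E810 port (cvl_0_0) is moved into the namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched under ip netns exec with the core and tracepoint masks shown. A minimal standalone sketch of that wiring, using only commands visible in the trace (the SPDK build path is an assumption, and the iptables comment tag used by the harness is omitted):

    # Sketch only: interface names, IPs and nvmf_tgt arguments are taken from
    # the trace above; SPDK_DIR is an assumption, adjust for the local checkout.
    SPDK_DIR=/path/to/spdk
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    sudo modprobe nvme-tcp                               # host-side NVMe/TCP driver
    # Launch the target in the namespace with the same masks as the trace:
    sudo ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    # While it runs, the notices above suggest capturing a trace snapshot with:
    #     spdk_trace -s nvmf -i 0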
00:07:51.094 [2024-10-01 17:07:49.446884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.094 [2024-10-01 17:07:49.447054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.094 [2024-10-01 17:07:49.447227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.094 [2024-10-01 17:07:49.447228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.663 [2024-10-01 17:07:50.161812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.663 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:51.664 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:51.664 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:51.664 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.664 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.664 Malloc0 00:07:51.925 [2024-10-01 17:07:50.224987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2817610 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2817610 /var/tmp/bdevperf.sock 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2817610 ']' 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:51.925 { 00:07:51.925 "params": { 00:07:51.925 "name": "Nvme$subsystem", 00:07:51.925 "trtype": "$TEST_TRANSPORT", 00:07:51.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.925 "adrfam": "ipv4", 00:07:51.925 "trsvcid": "$NVMF_PORT", 00:07:51.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.925 "hdgst": ${hdgst:-false}, 00:07:51.925 "ddgst": ${ddgst:-false} 00:07:51.925 }, 00:07:51.925 "method": "bdev_nvme_attach_controller" 00:07:51.925 } 00:07:51.925 EOF 00:07:51.925 )") 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:51.925 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:51.925 "params": { 00:07:51.925 "name": "Nvme0", 00:07:51.925 "trtype": "tcp", 00:07:51.925 "traddr": "10.0.0.2", 00:07:51.925 "adrfam": "ipv4", 00:07:51.925 "trsvcid": "4420", 00:07:51.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.925 "hdgst": false, 00:07:51.925 "ddgst": false 00:07:51.925 }, 00:07:51.925 "method": "bdev_nvme_attach_controller" 00:07:51.925 }' 00:07:51.925 [2024-10-01 17:07:50.330395] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:51.925 [2024-10-01 17:07:50.330447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817610 ] 00:07:51.925 [2024-10-01 17:07:50.392094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.925 [2024-10-01 17:07:50.423276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.186 Running I/O for 10 seconds... 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.186 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:52.187 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:52.450 
17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.450 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.450 [2024-10-01 17:07:50.990106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:52.450 [2024-10-01 17:07:50.990248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 
17:07:50.990421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.450 [2024-10-01 17:07:50.990549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.450 [2024-10-01 17:07:50.990559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990592] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.990984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.990998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.451 [2024-10-01 17:07:50.991195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.451 [2024-10-01 17:07:50.991205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.452 [2024-10-01 17:07:50.991213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.452 [2024-10-01 17:07:50.991223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.452 [2024-10-01 17:07:50.991230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.452 [2024-10-01 17:07:50.991239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.452 [2024-10-01 17:07:50.991246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.452 [2024-10-01 17:07:50.991256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.452 [2024-10-01 17:07:50.991264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.452 [2024-10-01 17:07:50.991273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c15f20 is same with the state(6) to be set 00:07:52.452 [2024-10-01 17:07:50.991313] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c15f20 was disconnected and freed. reset controller. 
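The burst of ABORTED - SQ DELETION completions above is the expected target-side reaction to the nvmf_subsystem_remove_host call issued a few lines earlier: once the host NQN loses access to the subsystem, the target deletes the submission queue, every in-flight read and write completes as aborted, and bdev_nvme frees the qpair and schedules a controller reset. Outside the harness, the same step can be driven with SPDK's stock RPC client; a sketch assuming scripts/rpc.py in an SPDK checkout and the default /var/tmp/spdk.sock socket (the test's rpc_cmd wrapper does the equivalent):

    # Revoke the host's access to the subsystem; in-flight I/O from that host
    # is then aborted with "SQ DELETION", as seen in the trace above.
    ./scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0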
00:07:52.452 [2024-10-01 17:07:50.992540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:52.452 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.452 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.452 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.452 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.452 task offset: 89984 on job bdev=Nvme0n1 fails 00:07:52.452 00:07:52.452 Latency(us) 00:07:52.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.452 Job: Nvme0n1 ended in about 0.43 seconds with error 00:07:52.452 Verification LBA range: start 0x0 length 0x400 00:07:52.452 Nvme0n1 : 0.43 1488.07 93.00 148.81 0.00 37950.35 5270.19 34078.72 00:07:52.452 =================================================================================================================== 00:07:52.452 Total : 1488.07 93.00 148.81 0.00 37950.35 5270.19 34078.72 00:07:52.452 [2024-10-01 17:07:50.994594] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.452 [2024-10-01 17:07:50.994620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fced0 (9): Bad file descriptor 00:07:52.712 [2024-10-01 17:07:50.997368] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:52.712 [2024-10-01 17:07:50.997451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:52.712 [2024-10-01 17:07:50.997483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.712 [2024-10-01 17:07:50.997498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:52.712 [2024-10-01 17:07:50.997506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:52.712 [2024-10-01 17:07:50.997514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:52.712 [2024-10-01 17:07:50.997522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19fced0 00:07:52.712 [2024-10-01 17:07:50.997544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fced0 (9): Bad file descriptor 00:07:52.712 [2024-10-01 17:07:50.997557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:52.712 [2024-10-01 17:07:50.997565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:52.712 [2024-10-01 17:07:50.997573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:52.712 [2024-10-01 17:07:50.997587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
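At this point the reconnect attempt fails with "does not allow host" (the fabric CONNECT completes with sct 1, sc 0x84) and the controller is left in the failed state, so host_management.sh re-adds the host with nvmf_subsystem_add_host, discards the first bdevperf instance, and starts a fresh one against the same generated attach-controller config. A sketch of that recovery path, assuming scripts/rpc.py as above; the outer "subsystems"/"bdev" wrapper of the config file follows SPDK's usual JSON-config layout and is an assumption here, since the trace only prints the inner object:

    # Restore access so a new initiator connection is accepted again:
    ./scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # bdev.json -- minimal bdevperf config mirroring the fragment printed by
    # gen_nvmf_target_json in the trace; the outer wrapper is assumed.
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # Then, as in the second run below:
    #     bdevperf --json bdev.json -q 64 -o 65536 -w verify -t 1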
00:07:52.712 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.712 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2817610 00:07:53.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2817610) - No such process 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:53.654 { 00:07:53.654 "params": { 00:07:53.654 "name": "Nvme$subsystem", 00:07:53.654 "trtype": "$TEST_TRANSPORT", 00:07:53.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.654 "adrfam": "ipv4", 00:07:53.654 "trsvcid": "$NVMF_PORT", 00:07:53.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.654 "hdgst": ${hdgst:-false}, 00:07:53.654 "ddgst": ${ddgst:-false} 00:07:53.654 }, 00:07:53.654 "method": "bdev_nvme_attach_controller" 00:07:53.654 } 00:07:53.654 EOF 00:07:53.654 )") 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:53.654 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:53.654 "params": { 00:07:53.654 "name": "Nvme0", 00:07:53.654 "trtype": "tcp", 00:07:53.654 "traddr": "10.0.0.2", 00:07:53.654 "adrfam": "ipv4", 00:07:53.654 "trsvcid": "4420", 00:07:53.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.654 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.654 "hdgst": false, 00:07:53.654 "ddgst": false 00:07:53.654 }, 00:07:53.654 "method": "bdev_nvme_attach_controller" 00:07:53.654 }' 00:07:53.654 [2024-10-01 17:07:52.064580] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:53.654 [2024-10-01 17:07:52.064633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817966 ] 00:07:53.654 [2024-10-01 17:07:52.125309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.654 [2024-10-01 17:07:52.155122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.915 Running I/O for 1 seconds... 00:07:54.854 1663.00 IOPS, 103.94 MiB/s 00:07:54.854 Latency(us) 00:07:54.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.854 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.854 Verification LBA range: start 0x0 length 0x400 00:07:54.854 Nvme0n1 : 1.03 1669.94 104.37 0.00 0.00 37662.99 6990.51 32112.64 00:07:54.854 =================================================================================================================== 00:07:54.854 Total : 1669.94 104.37 0.00 0.00 37662.99 6990.51 32112.64 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.115 rmmod nvme_tcp 00:07:55.115 rmmod nvme_fabrics 00:07:55.115 rmmod nvme_keyring 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2817503 ']' 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2817503 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2817503 ']' 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2817503 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 
-- # uname 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2817503 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2817503' 00:07:55.115 killing process with pid 2817503 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2817503 00:07:55.115 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2817503 00:07:55.376 [2024-10-01 17:07:53.705555] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.376 17:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.290 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.290 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:57.290 00:07:57.290 real 0m13.718s 00:07:57.290 user 0m21.139s 00:07:57.290 sys 0m6.321s 00:07:57.290 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.290 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.290 ************************************ 00:07:57.290 END TEST nvmf_host_management 00:07:57.290 ************************************ 00:07:57.551 17:07:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 ************************************ 00:07:57.552 START TEST nvmf_lvol 00:07:57.552 ************************************ 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.552 * Looking for test storage... 00:07:57.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.552 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.552 --rc genhtml_branch_coverage=1 00:07:57.552 --rc genhtml_function_coverage=1 00:07:57.552 --rc genhtml_legend=1 00:07:57.552 --rc geninfo_all_blocks=1 00:07:57.552 --rc geninfo_unexecuted_blocks=1 00:07:57.552 00:07:57.552 ' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.552 --rc genhtml_branch_coverage=1 00:07:57.552 --rc genhtml_function_coverage=1 00:07:57.552 --rc genhtml_legend=1 00:07:57.552 --rc geninfo_all_blocks=1 00:07:57.552 --rc geninfo_unexecuted_blocks=1 00:07:57.552 00:07:57.552 ' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.552 --rc genhtml_branch_coverage=1 00:07:57.552 --rc genhtml_function_coverage=1 00:07:57.552 --rc genhtml_legend=1 00:07:57.552 --rc geninfo_all_blocks=1 00:07:57.552 --rc geninfo_unexecuted_blocks=1 00:07:57.552 00:07:57.552 ' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.552 --rc genhtml_branch_coverage=1 00:07:57.552 --rc genhtml_function_coverage=1 00:07:57.552 --rc genhtml_legend=1 00:07:57.552 --rc geninfo_all_blocks=1 00:07:57.552 --rc geninfo_unexecuted_blocks=1 00:07:57.552 00:07:57.552 ' 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.552 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
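The trace just above steps through the lt / cmp_versions helper from scripts/common.sh ("lt 1.15 2" becoming "cmp_versions 1.15 '<' 2") to decide whether the installed lcov predates 2.x before LCOV_OPTS is assembled. A minimal standalone sketch of that comparison, reconstructed only from the calls visible in this xtrace (the real helper in scripts/common.sh may cover more operators and edge cases):

#!/usr/bin/env bash
# Sketch of the version comparison seen in the trace: split each version on
# '.', '-' and ':', then compare field by field until one side wins.
cmp_versions_sketch() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

# Usage mirroring the trace: lcov 1.15 is older than 2.x, so the
# --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 flags get added.
cmp_versions_sketch 1.15 '<' 2 && echo "old lcov: add branch/function coverage flags"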
00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.814 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.815 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.815 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:57.815 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:57.815 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.815 17:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:05.962 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:05.962 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.962 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.963 17:08:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:05.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:05.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:05.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:08:05.963 00:08:05.963 --- 10.0.0.2 ping statistics --- 00:08:05.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.963 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:05.963 00:08:05.963 --- 10.0.0.1 ping statistics --- 00:08:05.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.963 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2822640 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2822640 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2822640 ']' 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.963 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.963 [2024-10-01 17:08:03.540538] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:08:05.963 [2024-10-01 17:08:03.540590] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.964 [2024-10-01 17:08:03.607843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.964 [2024-10-01 17:08:03.638673] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.964 [2024-10-01 17:08:03.638710] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.964 [2024-10-01 17:08:03.638718] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.964 [2024-10-01 17:08:03.638725] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.964 [2024-10-01 17:08:03.638730] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.964 [2024-10-01 17:08:03.638870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.964 [2024-10-01 17:08:03.638983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.964 [2024-10-01 17:08:03.638985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.964 [2024-10-01 17:08:03.916043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.964 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.964 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:05.964 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.964 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:05.964 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:06.226 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:06.226 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f64ab790-a085-4fca-a3aa-a6eb4cce65c3 00:08:06.226 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f64ab790-a085-4fca-a3aa-a6eb4cce65c3 lvol 20 00:08:06.486 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=500620d9-1fc1-48f0-8511-d00255fe6061 00:08:06.486 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.747 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 500620d9-1fc1-48f0-8511-d00255fe6061 00:08:06.747 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.007 [2024-10-01 17:08:05.426235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.007 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.267 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2823013 00:08:07.267 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:07.267 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:08.209 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 500620d9-1fc1-48f0-8511-d00255fe6061 MY_SNAPSHOT 00:08:08.468 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d3bf677-f38e-402e-8eea-6b0e8ce1ee5a 00:08:08.468 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 500620d9-1fc1-48f0-8511-d00255fe6061 30 00:08:08.728 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d3bf677-f38e-402e-8eea-6b0e8ce1ee5a MY_CLONE 00:08:08.989 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ab9020e5-26c0-48e2-8f2b-2e58c8bcbe57 00:08:08.989 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ab9020e5-26c0-48e2-8f2b-2e58c8bcbe57 00:08:09.250 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2823013 00:08:17.403 Initializing NVMe Controllers 00:08:17.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:17.403 Controller IO queue size 128, less than required. 00:08:17.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
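Recapping the RPC sequence traced above, from nvmf_create_transport through bdev_lvol_inflate: two malloc bdevs are striped into raid0, an lvstore and an lvol of size 20 are created on top of it and exported over NVMe/TCP, and while spdk_nvme_perf drives I/O the lvol is snapshotted, resized, cloned and inflated. A condensed sketch of that sequence, with the UUID capture written as command substitution the way the trace implies and the variable names standing in for the values the real run recorded; it assumes a running nvmf_tgt reachable on the default RPC socket:

#!/usr/bin/env bash
# Condensed recap of the nvmf_lvol RPC calls shown in the xtrace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # lvol bdev, initial size 20

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf writes to the exported namespace, exercise the lvol:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"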
00:08:17.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:17.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:17.403 Initialization complete. Launching workers. 00:08:17.403 ======================================================== 00:08:17.403 Latency(us) 00:08:17.403 Device Information : IOPS MiB/s Average min max 00:08:17.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12298.30 48.04 10410.93 1503.07 45185.61 00:08:17.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17988.90 70.27 7115.70 387.98 58209.46 00:08:17.403 ======================================================== 00:08:17.403 Total : 30287.20 118.31 8453.75 387.98 58209.46 00:08:17.403 00:08:17.403 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.663 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 500620d9-1fc1-48f0-8511-d00255fe6061 00:08:17.924 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f64ab790-a085-4fca-a3aa-a6eb4cce65c3 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.185 rmmod nvme_tcp 00:08:18.185 rmmod nvme_fabrics 00:08:18.185 rmmod nvme_keyring 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2822640 ']' 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2822640 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2822640 ']' 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2822640 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2822640 00:08:18.185 17:08:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2822640' 00:08:18.185 killing process with pid 2822640 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2822640 00:08:18.185 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2822640 00:08:18.446 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:18.446 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:18.446 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:18.446 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.447 17:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.358 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.358 00:08:20.358 real 0m22.985s 00:08:20.358 user 1m2.443s 00:08:20.358 sys 0m8.300s 00:08:20.358 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.358 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.358 ************************************ 00:08:20.358 END TEST nvmf_lvol 00:08:20.358 ************************************ 00:08:20.622 17:08:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.622 17:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.622 17:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.622 17:08:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.622 ************************************ 00:08:20.622 START TEST nvmf_lvs_grow 00:08:20.622 ************************************ 00:08:20.622 17:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.622 * Looking for test storage... 
00:08:20.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:20.622 17:08:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.622 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:20.885 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.886 17:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.036 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:29.037 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:29.037 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.037 17:08:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:29.037 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:29.037 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:08:29.037 00:08:29.037 --- 10.0.0.2 ping statistics --- 00:08:29.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.037 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:08:29.037 00:08:29.037 --- 10.0.0.1 ping statistics --- 00:08:29.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.037 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2829392 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2829392 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:29.037 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2829392 ']' 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.038 [2024-10-01 17:08:26.502558] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
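For reference, the nvmf_tcp_init trace above moves one port of the e810 pair (cvl_0_0) into a private network namespace to act as the target while the other port (cvl_0_1) stays in the root namespace as the initiator, then addresses, raises, firewalls and ping-tests both ends before loading nvme-tcp. A condensed sketch of that sequence, using the interface names and addresses from this run; it is a simplified reconstruction, not the full nvmf/common.sh helper logic:

    # Condensed reconstruction of the setup traced above (simplified; error handling omitted).
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp

The nvmf_tgt application itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), as the trace continues below.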
00:08:29.038 [2024-10-01 17:08:26.502614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.038 [2024-10-01 17:08:26.571894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.038 [2024-10-01 17:08:26.605111] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.038 [2024-10-01 17:08:26.605155] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.038 [2024-10-01 17:08:26.605165] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.038 [2024-10-01 17:08:26.605173] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.038 [2024-10-01 17:08:26.605181] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.038 [2024-10-01 17:08:26.605204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.038 [2024-10-01 17:08:26.882615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.038 ************************************ 00:08:29.038 START TEST lvs_grow_clean 00:08:29.038 ************************************ 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.038 17:08:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.038 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:29.038 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 lvol 150 00:08:29.299 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4817059b-a64d-49cc-b791-5a7c8b7c3450 00:08:29.299 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.299 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:29.299 [2024-10-01 17:08:27.823644] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:29.299 [2024-10-01 17:08:27.823698] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:29.299 true 00:08:29.299 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:29.299 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:29.559 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:29.559 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.819 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4817059b-a64d-49cc-b791-5a7c8b7c3450 00:08:29.819 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.080 [2024-10-01 17:08:28.469629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.080 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2830023 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2830023 /var/tmp/bdevperf.sock 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2830023 ']' 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.339 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.340 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:30.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:30.340 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.340 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 [2024-10-01 17:08:28.702017] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
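The lvs_grow_clean setup traced around here builds a 200M AIO-backed logical-volume store, carves a 150M lvol out of it, enlarges the backing file to 400M (rescanning so SPDK sees the new size), and exports the lvol over NVMe/TCP for bdevperf. A hedged sketch of that RPC sequence; $rpc and $aio stand for the rpc.py script and aio_bdev file paths used in this run, and the lvs/lvol variables are placeholders for whatever the create calls return (the concrete UUIDs for this run appear in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py            # path as used in this run
    aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

    rm -f "$aio" && truncate -s 200M "$aio"
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # 49 x 4MiB data clusters at this point
    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)            # 150M logical volume (38 clusters)

    truncate -s 400M "$aio"                                       # grow the file underneath the lvstore
    "$rpc" bdev_aio_rescan aio_bdev                               # block count 51200 -> 102400

    # Export the lvol over NVMe/TCP so bdevperf can reach it from the root namespace.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z and attached to the subsystem with bdev_nvme_attach_controller, exactly as in the trace that follows.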
00:08:30.340 [2024-10-01 17:08:28.702069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830023 ] 00:08:30.340 [2024-10-01 17:08:28.779737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.340 [2024-10-01 17:08:28.810762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.288 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.288 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:31.288 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:31.288 Nvme0n1 00:08:31.288 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:31.548 [ 00:08:31.548 { 00:08:31.548 "name": "Nvme0n1", 00:08:31.548 "aliases": [ 00:08:31.548 "4817059b-a64d-49cc-b791-5a7c8b7c3450" 00:08:31.548 ], 00:08:31.548 "product_name": "NVMe disk", 00:08:31.548 "block_size": 4096, 00:08:31.548 "num_blocks": 38912, 00:08:31.548 "uuid": "4817059b-a64d-49cc-b791-5a7c8b7c3450", 00:08:31.548 "numa_id": 0, 00:08:31.548 "assigned_rate_limits": { 00:08:31.548 "rw_ios_per_sec": 0, 00:08:31.548 "rw_mbytes_per_sec": 0, 00:08:31.548 "r_mbytes_per_sec": 0, 00:08:31.548 "w_mbytes_per_sec": 0 00:08:31.548 }, 00:08:31.548 "claimed": false, 00:08:31.548 "zoned": false, 00:08:31.548 "supported_io_types": { 00:08:31.548 "read": true, 00:08:31.548 "write": true, 00:08:31.548 "unmap": true, 00:08:31.548 "flush": true, 00:08:31.548 "reset": true, 00:08:31.548 "nvme_admin": true, 00:08:31.548 "nvme_io": true, 00:08:31.548 "nvme_io_md": false, 00:08:31.548 "write_zeroes": true, 00:08:31.548 "zcopy": false, 00:08:31.548 "get_zone_info": false, 00:08:31.548 "zone_management": false, 00:08:31.548 "zone_append": false, 00:08:31.548 "compare": true, 00:08:31.548 "compare_and_write": true, 00:08:31.548 "abort": true, 00:08:31.548 "seek_hole": false, 00:08:31.548 "seek_data": false, 00:08:31.548 "copy": true, 00:08:31.548 "nvme_iov_md": false 00:08:31.548 }, 00:08:31.548 "memory_domains": [ 00:08:31.548 { 00:08:31.548 "dma_device_id": "system", 00:08:31.548 "dma_device_type": 1 00:08:31.548 } 00:08:31.548 ], 00:08:31.548 "driver_specific": { 00:08:31.548 "nvme": [ 00:08:31.548 { 00:08:31.548 "trid": { 00:08:31.548 "trtype": "TCP", 00:08:31.548 "adrfam": "IPv4", 00:08:31.548 "traddr": "10.0.0.2", 00:08:31.548 "trsvcid": "4420", 00:08:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:31.548 }, 00:08:31.548 "ctrlr_data": { 00:08:31.548 "cntlid": 1, 00:08:31.548 "vendor_id": "0x8086", 00:08:31.548 "model_number": "SPDK bdev Controller", 00:08:31.548 "serial_number": "SPDK0", 00:08:31.548 "firmware_revision": "25.01", 00:08:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.548 "oacs": { 00:08:31.548 "security": 0, 00:08:31.548 "format": 0, 00:08:31.548 "firmware": 0, 00:08:31.548 "ns_manage": 0 00:08:31.548 }, 00:08:31.549 "multi_ctrlr": true, 00:08:31.549 
"ana_reporting": false 00:08:31.549 }, 00:08:31.549 "vs": { 00:08:31.549 "nvme_version": "1.3" 00:08:31.549 }, 00:08:31.549 "ns_data": { 00:08:31.549 "id": 1, 00:08:31.549 "can_share": true 00:08:31.549 } 00:08:31.549 } 00:08:31.549 ], 00:08:31.549 "mp_policy": "active_passive" 00:08:31.549 } 00:08:31.549 } 00:08:31.549 ] 00:08:31.549 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:31.549 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2830159 00:08:31.549 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:31.549 Running I/O for 10 seconds... 00:08:32.489 Latency(us) 00:08:32.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.489 Nvme0n1 : 1.00 17382.00 67.90 0.00 0.00 0.00 0.00 0.00 00:08:32.489 =================================================================================================================== 00:08:32.489 Total : 17382.00 67.90 0.00 0.00 0.00 0.00 0.00 00:08:32.489 00:08:33.428 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:33.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.688 Nvme0n1 : 2.00 17475.00 68.26 0.00 0.00 0.00 0.00 0.00 00:08:33.688 =================================================================================================================== 00:08:33.688 Total : 17475.00 68.26 0.00 0.00 0.00 0.00 0.00 00:08:33.688 00:08:33.688 true 00:08:33.688 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:33.688 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:33.948 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:33.948 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:33.948 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2830159 00:08:34.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.519 Nvme0n1 : 3.00 17514.00 68.41 0.00 0.00 0.00 0.00 0.00 00:08:34.519 =================================================================================================================== 00:08:34.519 Total : 17514.00 68.41 0.00 0.00 0.00 0.00 0.00 00:08:34.519 00:08:35.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.907 Nvme0n1 : 4.00 17545.50 68.54 0.00 0.00 0.00 0.00 0.00 00:08:35.907 =================================================================================================================== 00:08:35.907 Total : 17545.50 68.54 0.00 0.00 0.00 0.00 0.00 00:08:35.907 00:08:36.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.850 
Nvme0n1 : 5.00 17572.40 68.64 0.00 0.00 0.00 0.00 0.00 00:08:36.850 =================================================================================================================== 00:08:36.850 Total : 17572.40 68.64 0.00 0.00 0.00 0.00 0.00 00:08:36.850 00:08:37.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.923 Nvme0n1 : 6.00 17593.00 68.72 0.00 0.00 0.00 0.00 0.00 00:08:37.923 =================================================================================================================== 00:08:37.923 Total : 17593.00 68.72 0.00 0.00 0.00 0.00 0.00 00:08:37.923 00:08:38.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.518 Nvme0n1 : 7.00 17612.29 68.80 0.00 0.00 0.00 0.00 0.00 00:08:38.518 =================================================================================================================== 00:08:38.518 Total : 17612.29 68.80 0.00 0.00 0.00 0.00 0.00 00:08:38.518 00:08:39.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.902 Nvme0n1 : 8.00 17626.75 68.85 0.00 0.00 0.00 0.00 0.00 00:08:39.902 =================================================================================================================== 00:08:39.902 Total : 17626.75 68.85 0.00 0.00 0.00 0.00 0.00 00:08:39.902 00:08:40.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.843 Nvme0n1 : 9.00 17638.00 68.90 0.00 0.00 0.00 0.00 0.00 00:08:40.843 =================================================================================================================== 00:08:40.843 Total : 17638.00 68.90 0.00 0.00 0.00 0.00 0.00 00:08:40.843 00:08:41.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.782 Nvme0n1 : 10.00 17648.60 68.94 0.00 0.00 0.00 0.00 0.00 00:08:41.782 =================================================================================================================== 00:08:41.782 Total : 17648.60 68.94 0.00 0.00 0.00 0.00 0.00 00:08:41.782 00:08:41.782 00:08:41.782 Latency(us) 00:08:41.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.782 Nvme0n1 : 10.01 17648.95 68.94 0.00 0.00 7247.47 3741.01 11086.51 00:08:41.782 =================================================================================================================== 00:08:41.782 Total : 17648.95 68.94 0.00 0.00 7247.47 3741.01 11086.51 00:08:41.782 { 00:08:41.782 "results": [ 00:08:41.782 { 00:08:41.782 "job": "Nvme0n1", 00:08:41.782 "core_mask": "0x2", 00:08:41.782 "workload": "randwrite", 00:08:41.782 "status": "finished", 00:08:41.782 "queue_depth": 128, 00:08:41.782 "io_size": 4096, 00:08:41.782 "runtime": 10.007057, 00:08:41.782 "iops": 17648.945139415115, 00:08:41.782 "mibps": 68.94119195084029, 00:08:41.782 "io_failed": 0, 00:08:41.782 "io_timeout": 0, 00:08:41.782 "avg_latency_us": 7247.467387787302, 00:08:41.782 "min_latency_us": 3741.0133333333333, 00:08:41.782 "max_latency_us": 11086.506666666666 00:08:41.782 } 00:08:41.782 ], 00:08:41.782 "core_count": 1 00:08:41.782 } 00:08:41.782 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2830023 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2830023 ']' 00:08:41.783 17:08:40 
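While bdevperf drives 10 seconds of 4K random writes against Nvme0n1, the test grows the lvstore onto the enlarged AIO file and then verifies the cluster accounting (49 -> 99 total data clusters in this run, with 61 left free once the 38-cluster lvol is counted). A minimal sketch of that check, assuming the $rpc and $lvs placeholders from the sketch above:

    "$rpc" bdev_lvol_grow_lvstore -u "$lvs"                    # pick up the 200M -> 400M resize
    total=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( total == 99 ))                                          # was 49 before the grow
    free=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 61 ))                                           # 99 total minus the lvol's 38 allocated clusters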
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2830023 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2830023 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2830023' 00:08:41.783 killing process with pid 2830023 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2830023 00:08:41.783 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.783 00:08:41.783 Latency(us) 00:08:41.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.783 =================================================================================================================== 00:08:41.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2830023 00:08:41.783 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.042 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:42.303 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:42.303 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:42.303 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:42.303 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:42.303 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.563 [2024-10-01 17:08:40.958247] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:42.563 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:42.823 request: 00:08:42.823 { 00:08:42.823 "uuid": "0bfb9d92-da18-426b-a18f-33ad52cd4fb5", 00:08:42.823 "method": "bdev_lvol_get_lvstores", 00:08:42.823 "req_id": 1 00:08:42.823 } 00:08:42.823 Got JSON-RPC error response 00:08:42.823 response: 00:08:42.823 { 00:08:42.823 "code": -19, 00:08:42.823 "message": "No such device" 00:08:42.823 } 00:08:42.823 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:42.823 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.824 aio_bdev 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4817059b-a64d-49cc-b791-5a7c8b7c3450 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4817059b-a64d-49cc-b791-5a7c8b7c3450 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:42.824 
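Deleting the base AIO bdev hot-removes the lvstore with it, so the script's NOT wrapper expects the bdev_lvol_get_lvstores call traced just above to come back with the JSON-RPC error shown here (-19, "No such device") rather than succeed, and then re-creates the AIO bdev so the lvol can be examined again. A stand-alone approximation of that expected-failure check, with a plain if-negation standing in for the autotest NOT helper and the same placeholder variables as before:

    "$rpc" bdev_aio_delete aio_bdev                      # the lvstore is hot-removed with its base bdev
    if "$rpc" bdev_lvol_get_lvstores -u "$lvs"; then     # must now fail with -19 "No such device"
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096          # same file, same name: the lvol is re-examined
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000             # waitforbdev: the lvol bdev is back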
17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.824 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.084 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4817059b-a64d-49cc-b791-5a7c8b7c3450 -t 2000 00:08:43.084 [ 00:08:43.084 { 00:08:43.084 "name": "4817059b-a64d-49cc-b791-5a7c8b7c3450", 00:08:43.084 "aliases": [ 00:08:43.084 "lvs/lvol" 00:08:43.084 ], 00:08:43.084 "product_name": "Logical Volume", 00:08:43.084 "block_size": 4096, 00:08:43.084 "num_blocks": 38912, 00:08:43.084 "uuid": "4817059b-a64d-49cc-b791-5a7c8b7c3450", 00:08:43.084 "assigned_rate_limits": { 00:08:43.084 "rw_ios_per_sec": 0, 00:08:43.084 "rw_mbytes_per_sec": 0, 00:08:43.084 "r_mbytes_per_sec": 0, 00:08:43.084 "w_mbytes_per_sec": 0 00:08:43.084 }, 00:08:43.084 "claimed": false, 00:08:43.084 "zoned": false, 00:08:43.084 "supported_io_types": { 00:08:43.084 "read": true, 00:08:43.084 "write": true, 00:08:43.084 "unmap": true, 00:08:43.084 "flush": false, 00:08:43.084 "reset": true, 00:08:43.084 "nvme_admin": false, 00:08:43.084 "nvme_io": false, 00:08:43.084 "nvme_io_md": false, 00:08:43.084 "write_zeroes": true, 00:08:43.084 "zcopy": false, 00:08:43.084 "get_zone_info": false, 00:08:43.084 "zone_management": false, 00:08:43.084 "zone_append": false, 00:08:43.084 "compare": false, 00:08:43.084 "compare_and_write": false, 00:08:43.084 "abort": false, 00:08:43.084 "seek_hole": true, 00:08:43.084 "seek_data": true, 00:08:43.084 "copy": false, 00:08:43.084 "nvme_iov_md": false 00:08:43.084 }, 00:08:43.084 "driver_specific": { 00:08:43.084 "lvol": { 00:08:43.084 "lvol_store_uuid": "0bfb9d92-da18-426b-a18f-33ad52cd4fb5", 00:08:43.084 "base_bdev": "aio_bdev", 00:08:43.084 "thin_provision": false, 00:08:43.084 "num_allocated_clusters": 38, 00:08:43.084 "snapshot": false, 00:08:43.084 "clone": false, 00:08:43.084 "esnap_clone": false 00:08:43.084 } 00:08:43.084 } 00:08:43.084 } 00:08:43.084 ] 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:43.344 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:43.604 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:43.604 
17:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4817059b-a64d-49cc-b791-5a7c8b7c3450 00:08:43.865 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0bfb9d92-da18-426b-a18f-33ad52cd4fb5 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.125 00:08:44.125 real 0m15.667s 00:08:44.125 user 0m15.391s 00:08:44.125 sys 0m1.327s 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:44.125 ************************************ 00:08:44.125 END TEST lvs_grow_clean 00:08:44.125 ************************************ 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.125 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.385 ************************************ 00:08:44.385 START TEST lvs_grow_dirty 00:08:44.385 ************************************ 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:44.385 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.647 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c064de33-260b-4b74-984d-19422d73a1ce 00:08:44.647 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:44.647 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c064de33-260b-4b74-984d-19422d73a1ce lvol 150 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=58692b14-9f63-45c8-b96f-17da0998e0be 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.908 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:45.169 [2024-10-01 17:08:43.538529] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:45.169 [2024-10-01 17:08:43.538584] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:45.169 true 00:08:45.169 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:45.169 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:45.430 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:45.430 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.430 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58692b14-9f63-45c8-b96f-17da0998e0be 00:08:45.690 17:08:44 
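The clean variant above finishes by tearing the stack down in roughly the reverse order it was built, before lvs_grow_dirty re-creates the same AIO file, lvstore and lvol fixtures here. A short sketch of that teardown, again using the placeholder variables from the earlier sketches:

    kill "$bdevperf_pid" && wait "$bdevperf_pid"                         # stop the I/O generator first
    "$rpc" nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    "$rpc" bdev_lvol_delete "$lvol"
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    "$rpc" bdev_aio_delete aio_bdev
    rm -f "$aio"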
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.690 [2024-10-01 17:08:44.192532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.690 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2833198 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2833198 /var/tmp/bdevperf.sock 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2833198 ']' 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.953 17:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.953 [2024-10-01 17:08:44.425172] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:08:45.954 [2024-10-01 17:08:44.425223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833198 ] 00:08:46.216 [2024-10-01 17:08:44.502213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.216 [2024-10-01 17:08:44.533105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.787 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.787 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:46.787 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:47.048 Nvme0n1 00:08:47.308 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.308 [ 00:08:47.308 { 00:08:47.308 "name": "Nvme0n1", 00:08:47.308 "aliases": [ 00:08:47.308 "58692b14-9f63-45c8-b96f-17da0998e0be" 00:08:47.308 ], 00:08:47.308 "product_name": "NVMe disk", 00:08:47.308 "block_size": 4096, 00:08:47.308 "num_blocks": 38912, 00:08:47.308 "uuid": "58692b14-9f63-45c8-b96f-17da0998e0be", 00:08:47.308 "numa_id": 0, 00:08:47.308 "assigned_rate_limits": { 00:08:47.308 "rw_ios_per_sec": 0, 00:08:47.308 "rw_mbytes_per_sec": 0, 00:08:47.308 "r_mbytes_per_sec": 0, 00:08:47.308 "w_mbytes_per_sec": 0 00:08:47.308 }, 00:08:47.308 "claimed": false, 00:08:47.308 "zoned": false, 00:08:47.308 "supported_io_types": { 00:08:47.308 "read": true, 00:08:47.308 "write": true, 00:08:47.308 "unmap": true, 00:08:47.308 "flush": true, 00:08:47.308 "reset": true, 00:08:47.308 "nvme_admin": true, 00:08:47.308 "nvme_io": true, 00:08:47.308 "nvme_io_md": false, 00:08:47.308 "write_zeroes": true, 00:08:47.308 "zcopy": false, 00:08:47.308 "get_zone_info": false, 00:08:47.308 "zone_management": false, 00:08:47.308 "zone_append": false, 00:08:47.308 "compare": true, 00:08:47.308 "compare_and_write": true, 00:08:47.308 "abort": true, 00:08:47.308 "seek_hole": false, 00:08:47.308 "seek_data": false, 00:08:47.308 "copy": true, 00:08:47.308 "nvme_iov_md": false 00:08:47.308 }, 00:08:47.308 "memory_domains": [ 00:08:47.308 { 00:08:47.308 "dma_device_id": "system", 00:08:47.308 "dma_device_type": 1 00:08:47.308 } 00:08:47.308 ], 00:08:47.308 "driver_specific": { 00:08:47.308 "nvme": [ 00:08:47.308 { 00:08:47.308 "trid": { 00:08:47.308 "trtype": "TCP", 00:08:47.308 "adrfam": "IPv4", 00:08:47.308 "traddr": "10.0.0.2", 00:08:47.308 "trsvcid": "4420", 00:08:47.308 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.308 }, 00:08:47.308 "ctrlr_data": { 00:08:47.308 "cntlid": 1, 00:08:47.308 "vendor_id": "0x8086", 00:08:47.308 "model_number": "SPDK bdev Controller", 00:08:47.308 "serial_number": "SPDK0", 00:08:47.308 "firmware_revision": "25.01", 00:08:47.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.308 "oacs": { 00:08:47.308 "security": 0, 00:08:47.308 "format": 0, 00:08:47.308 "firmware": 0, 00:08:47.308 "ns_manage": 0 00:08:47.308 }, 00:08:47.308 "multi_ctrlr": true, 00:08:47.308 
"ana_reporting": false 00:08:47.308 }, 00:08:47.308 "vs": { 00:08:47.308 "nvme_version": "1.3" 00:08:47.308 }, 00:08:47.308 "ns_data": { 00:08:47.308 "id": 1, 00:08:47.308 "can_share": true 00:08:47.308 } 00:08:47.308 } 00:08:47.308 ], 00:08:47.308 "mp_policy": "active_passive" 00:08:47.308 } 00:08:47.308 } 00:08:47.308 ] 00:08:47.308 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2833482 00:08:47.308 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.308 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.567 Running I/O for 10 seconds... 00:08:48.506 Latency(us) 00:08:48.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.506 Nvme0n1 : 1.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:08:48.506 =================================================================================================================== 00:08:48.506 Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:08:48.506 00:08:49.448 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:49.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.448 Nvme0n1 : 2.00 17903.50 69.94 0.00 0.00 0.00 0.00 0.00 00:08:49.448 =================================================================================================================== 00:08:49.448 Total : 17903.50 69.94 0.00 0.00 0.00 0.00 0.00 00:08:49.448 00:08:49.448 true 00:08:49.448 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:49.448 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:49.708 17:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:49.708 17:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:49.708 17:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2833482 00:08:50.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.649 Nvme0n1 : 3.00 17944.33 70.10 0.00 0.00 0.00 0.00 0.00 00:08:50.649 =================================================================================================================== 00:08:50.649 Total : 17944.33 70.10 0.00 0.00 0.00 0.00 0.00 00:08:50.649 00:08:51.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.591 Nvme0n1 : 4.00 17987.00 70.26 0.00 0.00 0.00 0.00 0.00 00:08:51.591 =================================================================================================================== 00:08:51.591 Total : 17987.00 70.26 0.00 0.00 0.00 0.00 0.00 00:08:51.591 00:08:52.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.531 
Nvme0n1 : 5.00 18012.80 70.36 0.00 0.00 0.00 0.00 0.00 00:08:52.531 =================================================================================================================== 00:08:52.531 Total : 18012.80 70.36 0.00 0.00 0.00 0.00 0.00 00:08:52.531 00:08:53.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.469 Nvme0n1 : 6.00 18037.67 70.46 0.00 0.00 0.00 0.00 0.00 00:08:53.469 =================================================================================================================== 00:08:53.469 Total : 18037.67 70.46 0.00 0.00 0.00 0.00 0.00 00:08:53.469 00:08:54.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.408 Nvme0n1 : 7.00 18064.14 70.56 0.00 0.00 0.00 0.00 0.00 00:08:54.408 =================================================================================================================== 00:08:54.408 Total : 18064.14 70.56 0.00 0.00 0.00 0.00 0.00 00:08:54.408 00:08:55.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.347 Nvme0n1 : 8.00 18080.00 70.62 0.00 0.00 0.00 0.00 0.00 00:08:55.347 =================================================================================================================== 00:08:55.347 Total : 18080.00 70.62 0.00 0.00 0.00 0.00 0.00 00:08:55.347 00:08:56.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.731 Nvme0n1 : 9.00 18090.44 70.67 0.00 0.00 0.00 0.00 0.00 00:08:56.731 =================================================================================================================== 00:08:56.731 Total : 18090.44 70.67 0.00 0.00 0.00 0.00 0.00 00:08:56.731 00:08:57.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.669 Nvme0n1 : 10.00 18098.10 70.70 0.00 0.00 0.00 0.00 0.00 00:08:57.669 =================================================================================================================== 00:08:57.669 Total : 18098.10 70.70 0.00 0.00 0.00 0.00 0.00 00:08:57.669 00:08:57.669 00:08:57.669 Latency(us) 00:08:57.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.669 Nvme0n1 : 10.01 18101.17 70.71 0.00 0.00 7069.48 4314.45 14636.37 00:08:57.669 =================================================================================================================== 00:08:57.669 Total : 18101.17 70.71 0.00 0.00 7069.48 4314.45 14636.37 00:08:57.669 { 00:08:57.669 "results": [ 00:08:57.669 { 00:08:57.669 "job": "Nvme0n1", 00:08:57.669 "core_mask": "0x2", 00:08:57.669 "workload": "randwrite", 00:08:57.669 "status": "finished", 00:08:57.669 "queue_depth": 128, 00:08:57.669 "io_size": 4096, 00:08:57.669 "runtime": 10.005376, 00:08:57.669 "iops": 18101.168811646858, 00:08:57.669 "mibps": 70.70769067049554, 00:08:57.669 "io_failed": 0, 00:08:57.669 "io_timeout": 0, 00:08:57.669 "avg_latency_us": 7069.480678560057, 00:08:57.669 "min_latency_us": 4314.453333333333, 00:08:57.669 "max_latency_us": 14636.373333333333 00:08:57.669 } 00:08:57.669 ], 00:08:57.669 "core_count": 1 00:08:57.669 } 00:08:57.669 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2833198 00:08:57.669 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2833198 ']' 00:08:57.669 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@954 -- # kill -0 2833198 00:08:57.669 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2833198 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2833198' 00:08:57.670 killing process with pid 2833198 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2833198 00:08:57.670 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.670 00:08:57.670 Latency(us) 00:08:57.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.670 =================================================================================================================== 00:08:57.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.670 17:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2833198 00:08:57.670 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.930 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.930 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:57.930 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2829392 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2829392 00:08:58.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2829392 Killed "${NVMF_APP[@]}" "$@" 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2835571 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2835571 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2835571 ']' 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.190 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.190 [2024-10-01 17:08:56.699870] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:58.190 [2024-10-01 17:08:56.699927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.451 [2024-10-01 17:08:56.766493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.451 [2024-10-01 17:08:56.797199] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.451 [2024-10-01 17:08:56.797237] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.451 [2024-10-01 17:08:56.797245] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.451 [2024-10-01 17:08:56.797252] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.451 [2024-10-01 17:08:56.797257] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
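
What the trace above is setting up: the original nvmf_tgt was killed with -9, so the logical-volume store sitting on aio_bdev was never cleanly unloaded, and a fresh nvmf_tgt is started inside the test namespace to recover it. A minimal sketch of that recovery path, assuming $rootdir and $testdir correspond to the spdk checkout and test/nvmf/target paths shown in the trace and that the new target answers on the default /var/tmp/spdk.sock:

    # Restart the target inside the namespace (the test helpers do this via nvmfappstart).
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Re-registering an aio bdev on the same backing file makes blobstore replay its
    # metadata ("Performing recovery on blobstore" below) and re-expose lvs/lvol.
    $rootdir/scripts/rpc.py bdev_aio_create $testdir/aio_bdev aio_bdev 4096
    $rootdir/scripts/rpc.py bdev_wait_for_examine
    $rootdir/scripts/rpc.py bdev_get_bdevs -b 58692b14-9f63-45c8-b96f-17da0998e0be -t 2000
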
00:08:58.451 [2024-10-01 17:08:56.797276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.451 17:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.712 [2024-10-01 17:08:57.079295] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:58.712 [2024-10-01 17:08:57.079382] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:58.712 [2024-10-01 17:08:57.079412] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 58692b14-9f63-45c8-b96f-17da0998e0be 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=58692b14-9f63-45c8-b96f-17da0998e0be 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.712 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.972 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58692b14-9f63-45c8-b96f-17da0998e0be -t 2000 00:08:58.972 [ 00:08:58.972 { 00:08:58.972 "name": "58692b14-9f63-45c8-b96f-17da0998e0be", 00:08:58.972 "aliases": [ 00:08:58.972 "lvs/lvol" 00:08:58.972 ], 00:08:58.972 "product_name": "Logical Volume", 00:08:58.972 "block_size": 4096, 00:08:58.972 "num_blocks": 38912, 00:08:58.972 "uuid": "58692b14-9f63-45c8-b96f-17da0998e0be", 00:08:58.972 "assigned_rate_limits": { 00:08:58.972 "rw_ios_per_sec": 0, 00:08:58.972 "rw_mbytes_per_sec": 0, 00:08:58.972 "r_mbytes_per_sec": 0, 00:08:58.972 "w_mbytes_per_sec": 0 00:08:58.972 }, 00:08:58.972 "claimed": false, 00:08:58.972 "zoned": false, 
00:08:58.972 "supported_io_types": { 00:08:58.972 "read": true, 00:08:58.972 "write": true, 00:08:58.972 "unmap": true, 00:08:58.972 "flush": false, 00:08:58.972 "reset": true, 00:08:58.972 "nvme_admin": false, 00:08:58.972 "nvme_io": false, 00:08:58.972 "nvme_io_md": false, 00:08:58.972 "write_zeroes": true, 00:08:58.972 "zcopy": false, 00:08:58.972 "get_zone_info": false, 00:08:58.972 "zone_management": false, 00:08:58.972 "zone_append": false, 00:08:58.972 "compare": false, 00:08:58.972 "compare_and_write": false, 00:08:58.972 "abort": false, 00:08:58.972 "seek_hole": true, 00:08:58.972 "seek_data": true, 00:08:58.972 "copy": false, 00:08:58.972 "nvme_iov_md": false 00:08:58.972 }, 00:08:58.972 "driver_specific": { 00:08:58.972 "lvol": { 00:08:58.972 "lvol_store_uuid": "c064de33-260b-4b74-984d-19422d73a1ce", 00:08:58.972 "base_bdev": "aio_bdev", 00:08:58.972 "thin_provision": false, 00:08:58.972 "num_allocated_clusters": 38, 00:08:58.972 "snapshot": false, 00:08:58.972 "clone": false, 00:08:58.972 "esnap_clone": false 00:08:58.972 } 00:08:58.972 } 00:08:58.972 } 00:08:58.972 ] 00:08:58.972 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:58.972 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:58.972 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:59.233 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:59.233 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:59.233 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.493 [2024-10-01 17:08:57.935578] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:59.493 17:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:08:59.753 request: 00:08:59.753 { 00:08:59.753 "uuid": "c064de33-260b-4b74-984d-19422d73a1ce", 00:08:59.753 "method": "bdev_lvol_get_lvstores", 00:08:59.753 "req_id": 1 00:08:59.753 } 00:08:59.753 Got JSON-RPC error response 00:08:59.753 response: 00:08:59.753 { 00:08:59.753 "code": -19, 00:08:59.753 "message": "No such device" 00:08:59.753 } 00:08:59.753 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:59.753 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.753 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.753 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.753 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.753 aio_bdev 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 58692b14-9f63-45c8-b96f-17da0998e0be 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=58692b14-9f63-45c8-b96f-17da0998e0be 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.014 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:00.014 17:08:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58692b14-9f63-45c8-b96f-17da0998e0be -t 2000 00:09:00.273 [ 00:09:00.273 { 00:09:00.273 "name": "58692b14-9f63-45c8-b96f-17da0998e0be", 00:09:00.273 "aliases": [ 00:09:00.273 "lvs/lvol" 00:09:00.273 ], 00:09:00.273 "product_name": "Logical Volume", 00:09:00.273 "block_size": 4096, 00:09:00.273 "num_blocks": 38912, 00:09:00.273 "uuid": "58692b14-9f63-45c8-b96f-17da0998e0be", 00:09:00.273 "assigned_rate_limits": { 00:09:00.273 "rw_ios_per_sec": 0, 00:09:00.273 "rw_mbytes_per_sec": 0, 00:09:00.273 "r_mbytes_per_sec": 0, 00:09:00.273 "w_mbytes_per_sec": 0 00:09:00.273 }, 00:09:00.273 "claimed": false, 00:09:00.273 "zoned": false, 00:09:00.273 "supported_io_types": { 00:09:00.273 "read": true, 00:09:00.273 "write": true, 00:09:00.273 "unmap": true, 00:09:00.273 "flush": false, 00:09:00.273 "reset": true, 00:09:00.273 "nvme_admin": false, 00:09:00.273 "nvme_io": false, 00:09:00.273 "nvme_io_md": false, 00:09:00.274 "write_zeroes": true, 00:09:00.274 "zcopy": false, 00:09:00.274 "get_zone_info": false, 00:09:00.274 "zone_management": false, 00:09:00.274 "zone_append": false, 00:09:00.274 "compare": false, 00:09:00.274 "compare_and_write": false, 00:09:00.274 "abort": false, 00:09:00.274 "seek_hole": true, 00:09:00.274 "seek_data": true, 00:09:00.274 "copy": false, 00:09:00.274 "nvme_iov_md": false 00:09:00.274 }, 00:09:00.274 "driver_specific": { 00:09:00.274 "lvol": { 00:09:00.274 "lvol_store_uuid": "c064de33-260b-4b74-984d-19422d73a1ce", 00:09:00.274 "base_bdev": "aio_bdev", 00:09:00.274 "thin_provision": false, 00:09:00.274 "num_allocated_clusters": 38, 00:09:00.274 "snapshot": false, 00:09:00.274 "clone": false, 00:09:00.274 "esnap_clone": false 00:09:00.274 } 00:09:00.274 } 00:09:00.274 } 00:09:00.274 ] 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c064de33-260b-4b74-984d-19422d73a1ce 00:09:00.274 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.535 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.535 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58692b14-9f63-45c8-b96f-17da0998e0be 00:09:00.795 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c064de33-260b-4b74-984d-19422d73a1ce 
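
The checks and teardown traced here reduce to a short sequence. The sketch below is hand-written and assumes the same lvstore UUID, lvol UUID, and default RPC socket as in the trace, with $rootdir and $testdir as before; the expected values (99 total clusters, 61 free) are the ones the test asserts on after growing the dirty lvstore.

    rpc=$rootdir/scripts/rpc.py
    lvs=c064de33-260b-4b74-984d-19422d73a1ce

    # After recovery the grown lvstore should report 99 data clusters, 61 of them free.
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # expect 99
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # expect 61

    # Teardown: lvol first, then the lvstore, then the aio bdev and its backing file.
    $rpc bdev_lvol_delete 58692b14-9f63-45c8-b96f-17da0998e0be
    $rpc bdev_lvol_delete_lvstore -u $lvs
    $rpc bdev_aio_delete aio_bdev
    rm -f $testdir/aio_bdev
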
00:09:00.795 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.056 00:09:01.056 real 0m16.821s 00:09:01.056 user 0m44.861s 00:09:01.056 sys 0m2.901s 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.056 ************************************ 00:09:01.056 END TEST lvs_grow_dirty 00:09:01.056 ************************************ 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:01.056 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:01.056 nvmf_trace.0 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.315 rmmod nvme_tcp 00:09:01.315 rmmod nvme_fabrics 00:09:01.315 rmmod nvme_keyring 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2835571 ']' 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2835571 00:09:01.315 
17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2835571 ']' 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2835571 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2835571 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2835571' 00:09:01.315 killing process with pid 2835571 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2835571 00:09:01.315 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2835571 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.575 17:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.486 00:09:03.486 real 0m42.990s 00:09:03.486 user 1m5.816s 00:09:03.486 sys 0m10.086s 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:03.486 ************************************ 00:09:03.486 END TEST nvmf_lvs_grow 00:09:03.486 ************************************ 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.486 17:09:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.486 ************************************ 00:09:03.486 START TEST nvmf_bdev_io_wait 00:09:03.486 ************************************ 00:09:03.486 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.747 * Looking for test storage... 00:09:03.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.747 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.747 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.747 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.747 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.748 --rc genhtml_branch_coverage=1 00:09:03.748 --rc genhtml_function_coverage=1 00:09:03.748 --rc genhtml_legend=1 00:09:03.748 --rc geninfo_all_blocks=1 00:09:03.748 --rc geninfo_unexecuted_blocks=1 00:09:03.748 00:09:03.748 ' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.748 --rc genhtml_branch_coverage=1 00:09:03.748 --rc genhtml_function_coverage=1 00:09:03.748 --rc genhtml_legend=1 00:09:03.748 --rc geninfo_all_blocks=1 00:09:03.748 --rc geninfo_unexecuted_blocks=1 00:09:03.748 00:09:03.748 ' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.748 --rc genhtml_branch_coverage=1 00:09:03.748 --rc genhtml_function_coverage=1 00:09:03.748 --rc genhtml_legend=1 00:09:03.748 --rc geninfo_all_blocks=1 00:09:03.748 --rc geninfo_unexecuted_blocks=1 00:09:03.748 00:09:03.748 ' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.748 --rc genhtml_branch_coverage=1 00:09:03.748 --rc genhtml_function_coverage=1 00:09:03.748 --rc genhtml_legend=1 00:09:03.748 --rc geninfo_all_blocks=1 00:09:03.748 --rc geninfo_unexecuted_blocks=1 00:09:03.748 00:09:03.748 ' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.748 17:09:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.748 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.749 17:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:11.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:11.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.894 17:09:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:11.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:11.894 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:11.895 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:09:11.895 00:09:11.895 --- 10.0.0.2 ping statistics --- 00:09:11.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.895 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:11.895 00:09:11.895 --- 10.0.0.1 ping statistics --- 00:09:11.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.895 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2840656 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2840656 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2840656 ']' 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.895 17:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 [2024-10-01 17:09:09.617035] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
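Condensed, the nvmf_tcp_init sequence traced above amounts to the following steps (a sketch reconstructed directly from the commands logged in this run, using its cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses; the real logic lives in nvmf/common.sh):

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side lives inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420 for NVMe/TCP and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1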
00:09:11.895 [2024-10-01 17:09:09.617107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.895 [2024-10-01 17:09:09.689404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.895 [2024-10-01 17:09:09.731656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.895 [2024-10-01 17:09:09.731700] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.895 [2024-10-01 17:09:09.731709] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.895 [2024-10-01 17:09:09.731716] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.895 [2024-10-01 17:09:09.731725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.895 [2024-10-01 17:09:09.731873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.895 [2024-10-01 17:09:09.732019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.895 [2024-10-01 17:09:09.732271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.895 [2024-10-01 17:09:09.732271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.895 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.895 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:11.895 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:11.895 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.895 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:12.158 [2024-10-01 17:09:10.516293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 Malloc0 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 [2024-10-01 17:09:10.587023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2841129 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2841131 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:12.158 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:12.158 { 00:09:12.158 "params": { 
00:09:12.158 "name": "Nvme$subsystem", 00:09:12.158 "trtype": "$TEST_TRANSPORT", 00:09:12.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "$NVMF_PORT", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.159 "hdgst": ${hdgst:-false}, 00:09:12.159 "ddgst": ${ddgst:-false} 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 } 00:09:12.159 EOF 00:09:12.159 )") 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2841133 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:12.159 { 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme$subsystem", 00:09:12.159 "trtype": "$TEST_TRANSPORT", 00:09:12.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "$NVMF_PORT", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.159 "hdgst": ${hdgst:-false}, 00:09:12.159 "ddgst": ${ddgst:-false} 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 } 00:09:12.159 EOF 00:09:12.159 )") 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2841136 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:12.159 { 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme$subsystem", 00:09:12.159 "trtype": "$TEST_TRANSPORT", 00:09:12.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "$NVMF_PORT", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.159 "hdgst": ${hdgst:-false}, 
00:09:12.159 "ddgst": ${ddgst:-false} 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 } 00:09:12.159 EOF 00:09:12.159 )") 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:12.159 { 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme$subsystem", 00:09:12.159 "trtype": "$TEST_TRANSPORT", 00:09:12.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "$NVMF_PORT", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.159 "hdgst": ${hdgst:-false}, 00:09:12.159 "ddgst": ${ddgst:-false} 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 } 00:09:12.159 EOF 00:09:12.159 )") 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2841129 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme1", 00:09:12.159 "trtype": "tcp", 00:09:12.159 "traddr": "10.0.0.2", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "4420", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.159 "hdgst": false, 00:09:12.159 "ddgst": false 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 }' 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme1", 00:09:12.159 "trtype": "tcp", 00:09:12.159 "traddr": "10.0.0.2", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "4420", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.159 "hdgst": false, 00:09:12.159 "ddgst": false 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 }' 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme1", 00:09:12.159 "trtype": "tcp", 00:09:12.159 "traddr": "10.0.0.2", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "4420", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.159 "hdgst": false, 00:09:12.159 "ddgst": false 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 }' 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:12.159 17:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:12.159 "params": { 00:09:12.159 "name": "Nvme1", 00:09:12.159 "trtype": "tcp", 00:09:12.159 "traddr": "10.0.0.2", 00:09:12.159 "adrfam": "ipv4", 00:09:12.159 "trsvcid": "4420", 00:09:12.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.159 "hdgst": false, 00:09:12.159 "ddgst": false 00:09:12.159 }, 00:09:12.159 "method": "bdev_nvme_attach_controller" 00:09:12.159 }' 00:09:12.159 [2024-10-01 17:09:10.646022] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:12.159 [2024-10-01 17:09:10.646074] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:12.159 [2024-10-01 17:09:10.646965] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:12.159 [2024-10-01 17:09:10.647025] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:12.159 [2024-10-01 17:09:10.651746] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:12.159 [2024-10-01 17:09:10.651793] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:12.159 [2024-10-01 17:09:10.653950] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:09:12.159 [2024-10-01 17:09:10.654005] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:12.420 [2024-10-01 17:09:10.791889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.420 [2024-10-01 17:09:10.809880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.420 [2024-10-01 17:09:10.850129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.420 [2024-10-01 17:09:10.868339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.420 [2024-10-01 17:09:10.910040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.420 [2024-10-01 17:09:10.929900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.420 [2024-10-01 17:09:10.958690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.681 [2024-10-01 17:09:10.977320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.942 Running I/O for 1 seconds... 00:09:12.942 Running I/O for 1 seconds... 00:09:12.942 Running I/O for 1 seconds... 00:09:12.942 Running I/O for 1 seconds... 00:09:13.886 12471.00 IOPS, 48.71 MiB/s 00:09:13.886 Latency(us) 00:09:13.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.886 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:13.886 Nvme1n1 : 1.01 12533.66 48.96 0.00 0.00 10178.57 5079.04 18896.21 00:09:13.886 =================================================================================================================== 00:09:13.886 Total : 12533.66 48.96 0.00 0.00 10178.57 5079.04 18896.21 00:09:13.886 10961.00 IOPS, 42.82 MiB/s 00:09:13.886 Latency(us) 00:09:13.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.886 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:13.886 Nvme1n1 : 1.01 11015.15 43.03 0.00 0.00 11579.77 5133.65 19114.67 00:09:13.886 =================================================================================================================== 00:09:13.886 Total : 11015.15 43.03 0.00 0.00 11579.77 5133.65 19114.67 00:09:13.886 18381.00 IOPS, 71.80 MiB/s 00:09:13.886 Latency(us) 00:09:13.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.886 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:13.886 Nvme1n1 : 1.01 18462.89 72.12 0.00 0.00 6917.03 2703.36 14527.15 00:09:13.886 =================================================================================================================== 00:09:13.886 Total : 18462.89 72.12 0.00 0.00 6917.03 2703.36 14527.15 00:09:13.886 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2841131 00:09:13.886 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2841133 00:09:13.886 188528.00 IOPS, 736.44 MiB/s 00:09:13.886 Latency(us) 00:09:13.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.886 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:13.886 Nvme1n1 : 1.00 188111.24 734.81 0.00 0.00 676.91 303.79 2225.49 00:09:13.886 =================================================================================================================== 00:09:13.886 Total : 188111.24 734.81 0.00 
0.00 676.91 303.79 2225.49 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2841136 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.147 rmmod nvme_tcp 00:09:14.147 rmmod nvme_fabrics 00:09:14.147 rmmod nvme_keyring 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2840656 ']' 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2840656 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2840656 ']' 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2840656 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.147 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2840656 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2840656' 00:09:14.408 killing process with pid 2840656 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2840656 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2840656 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # 
'[' '' == iso ']' 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.408 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.956 00:09:16.956 real 0m12.864s 00:09:16.956 user 0m19.919s 00:09:16.956 sys 0m6.994s 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.956 ************************************ 00:09:16.956 END TEST nvmf_bdev_io_wait 00:09:16.956 ************************************ 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.956 ************************************ 00:09:16.956 START TEST nvmf_queue_depth 00:09:16.956 ************************************ 00:09:16.956 17:09:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.956 * Looking for test storage... 
00:09:16.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:16.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.956 --rc genhtml_branch_coverage=1 00:09:16.956 --rc genhtml_function_coverage=1 00:09:16.956 --rc genhtml_legend=1 00:09:16.956 --rc geninfo_all_blocks=1 00:09:16.956 --rc geninfo_unexecuted_blocks=1 00:09:16.956 00:09:16.956 ' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:16.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.956 --rc genhtml_branch_coverage=1 00:09:16.956 --rc genhtml_function_coverage=1 00:09:16.956 --rc genhtml_legend=1 00:09:16.956 --rc geninfo_all_blocks=1 00:09:16.956 --rc geninfo_unexecuted_blocks=1 00:09:16.956 00:09:16.956 ' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:16.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.956 --rc genhtml_branch_coverage=1 00:09:16.956 --rc genhtml_function_coverage=1 00:09:16.956 --rc genhtml_legend=1 00:09:16.956 --rc geninfo_all_blocks=1 00:09:16.956 --rc geninfo_unexecuted_blocks=1 00:09:16.956 00:09:16.956 ' 00:09:16.956 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:16.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.956 --rc genhtml_branch_coverage=1 00:09:16.956 --rc genhtml_function_coverage=1 00:09:16.957 --rc genhtml_legend=1 00:09:16.957 --rc geninfo_all_blocks=1 00:09:16.957 --rc geninfo_unexecuted_blocks=1 00:09:16.957 00:09:16.957 ' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.957 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:25.101 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:25.101 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:25.101 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.101 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:25.102 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:09:25.102 00:09:25.102 --- 10.0.0.2 ping statistics --- 00:09:25.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.102 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:09:25.102 00:09:25.102 --- 10.0.0.1 ping statistics --- 00:09:25.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.102 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2845944 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2845944 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2845944 ']' 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.102 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 [2024-10-01 17:09:22.650119] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
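The nvmf_tcp_init trace above turns the two detected E810 ports (mapped from their PCI functions to kernel net devices under /sys/bus/pci/devices/$pci/net/) into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps, with interface names and addresses taken from the trace and paths shortened; this is a sketch of what the harness does, not the harness itself:

    TARGET_IF=cvl_0_0            # target-side port, moved into its own namespace
    INITIATOR_IF=cvl_0_1         # initiator-side port, left in the default namespace
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # The harness additionally tags this rule with an SPDK_NVMF comment so it can be stripped at teardown.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator
    # The target application then runs inside the namespace (full workspace path shortened here).
    ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &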
00:09:25.102 [2024-10-01 17:09:22.650168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.102 [2024-10-01 17:09:22.736576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.102 [2024-10-01 17:09:22.766671] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.102 [2024-10-01 17:09:22.766714] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.102 [2024-10-01 17:09:22.766721] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.102 [2024-10-01 17:09:22.766728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.102 [2024-10-01 17:09:22.766734] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.102 [2024-10-01 17:09:22.766761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 [2024-10-01 17:09:23.484950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 Malloc0 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.102 17:09:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.102 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.102 [2024-10-01 17:09:23.547586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2846079 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2846079 /var/tmp/bdevperf.sock 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2846079 ']' 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.103 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.103 [2024-10-01 17:09:23.605676] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
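With the target listening on /var/tmp/spdk.sock, queue_depth.sh builds its configuration over RPC exactly as traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then started against its own socket with a queue depth of 1024. The same sequence written as plain rpc.py calls (paths shortened; a sketch assuming the stock scripts/rpc.py front end rather than the rpc_cmd wrapper used in the trace):

    rpc=scripts/rpc.py        # the trace drives the same RPCs through rpc_cmd
    $rpc nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1024 outstanding 4 KiB verify I/Os for 10 seconds, controlled over a separate RPC socket
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The trace that follows attaches the remote namespace as NVMe0 with bdev_nvme_attach_controller and then drives the run through bdevperf.py perform_tests.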
00:09:25.103 [2024-10-01 17:09:23.605743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846079 ] 00:09:25.364 [2024-10-01 17:09:23.670407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.364 [2024-10-01 17:09:23.709829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.364 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.364 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:25.364 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:25.364 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.364 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.625 NVMe0n1 00:09:25.625 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.625 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:25.625 Running I/O for 10 seconds... 00:09:35.868 8939.00 IOPS, 34.92 MiB/s 9193.00 IOPS, 35.91 MiB/s 9224.00 IOPS, 36.03 MiB/s 9334.50 IOPS, 36.46 MiB/s 9814.60 IOPS, 38.34 MiB/s 10141.50 IOPS, 39.62 MiB/s 10385.00 IOPS, 40.57 MiB/s 10613.25 IOPS, 41.46 MiB/s 10714.89 IOPS, 41.86 MiB/s 10846.50 IOPS, 42.37 MiB/s 00:09:35.868 Latency(us) 00:09:35.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.868 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:35.868 Verification LBA range: start 0x0 length 0x4000 00:09:35.868 NVMe0n1 : 10.08 10858.02 42.41 0.00 0.00 93953.84 24576.00 68594.35 00:09:35.868 =================================================================================================================== 00:09:35.868 Total : 10858.02 42.41 0.00 0.00 93953.84 24576.00 68594.35 00:09:35.868 { 00:09:35.868 "results": [ 00:09:35.868 { 00:09:35.868 "job": "NVMe0n1", 00:09:35.868 "core_mask": "0x1", 00:09:35.868 "workload": "verify", 00:09:35.868 "status": "finished", 00:09:35.868 "verify_range": { 00:09:35.868 "start": 0, 00:09:35.868 "length": 16384 00:09:35.868 }, 00:09:35.868 "queue_depth": 1024, 00:09:35.868 "io_size": 4096, 00:09:35.868 "runtime": 10.078168, 00:09:35.868 "iops": 10858.024990256166, 00:09:35.868 "mibps": 42.41416011818815, 00:09:35.868 "io_failed": 0, 00:09:35.868 "io_timeout": 0, 00:09:35.868 "avg_latency_us": 93953.84330418201, 00:09:35.868 "min_latency_us": 24576.0, 00:09:35.868 "max_latency_us": 68594.34666666666 00:09:35.868 } 00:09:35.868 ], 00:09:35.868 "core_count": 1 00:09:35.868 } 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2846079 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2846079 ']' 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2846079 00:09:35.868 17:09:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2846079 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2846079' 00:09:35.868 killing process with pid 2846079 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2846079 00:09:35.868 Received shutdown signal, test time was about 10.000000 seconds 00:09:35.868 00:09:35.868 Latency(us) 00:09:35.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.868 =================================================================================================================== 00:09:35.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2846079 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.868 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.868 rmmod nvme_tcp 00:09:35.868 rmmod nvme_fabrics 00:09:36.128 rmmod nvme_keyring 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2845944 ']' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2845944 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2845944 ']' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2845944 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2845944 00:09:36.128 17:09:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2845944' 00:09:36.128 killing process with pid 2845944 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2845944 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2845944 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.128 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.674 00:09:38.674 real 0m21.747s 00:09:38.674 user 0m24.506s 00:09:38.674 sys 0m6.779s 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.674 ************************************ 00:09:38.674 END TEST nvmf_queue_depth 00:09:38.674 ************************************ 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.674 ************************************ 00:09:38.674 START TEST nvmf_target_multipath 00:09:38.674 ************************************ 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.674 * Looking for test storage... 
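Each test above is launched through the run_test helper, which is what produces the START TEST/END TEST banners and the real/user/sys timing lines bracketing nvmf_queue_depth and, just below, nvmf_target_multipath. A minimal sketch of that visible behavior, not the actual autotest_common.sh implementation:

    # Minimal sketch of run_test's visible behavior: banner, timed run, banner.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_target_multipath test/nvmf/target/multipath.sh --transport=tcp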
00:09:38.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.674 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:38.675 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:38.675 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.675 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.675 --rc genhtml_branch_coverage=1 00:09:38.675 --rc genhtml_function_coverage=1 00:09:38.675 --rc genhtml_legend=1 00:09:38.675 --rc geninfo_all_blocks=1 00:09:38.675 --rc geninfo_unexecuted_blocks=1 00:09:38.675 00:09:38.675 ' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.675 --rc genhtml_branch_coverage=1 00:09:38.675 --rc genhtml_function_coverage=1 00:09:38.675 --rc genhtml_legend=1 00:09:38.675 --rc geninfo_all_blocks=1 00:09:38.675 --rc geninfo_unexecuted_blocks=1 00:09:38.675 00:09:38.675 ' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.675 --rc genhtml_branch_coverage=1 00:09:38.675 --rc genhtml_function_coverage=1 00:09:38.675 --rc genhtml_legend=1 00:09:38.675 --rc geninfo_all_blocks=1 00:09:38.675 --rc geninfo_unexecuted_blocks=1 00:09:38.675 00:09:38.675 ' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.675 --rc genhtml_branch_coverage=1 00:09:38.675 --rc genhtml_function_coverage=1 00:09:38.675 --rc genhtml_legend=1 00:09:38.675 --rc geninfo_all_blocks=1 00:09:38.675 --rc geninfo_unexecuted_blocks=1 00:09:38.675 00:09:38.675 ' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.675 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.676 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.817 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.817 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:46.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:46.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.818 17:09:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:46.818 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.818 17:09:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:46.818 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.818 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:09:46.819 00:09:46.819 --- 10.0.0.2 ping statistics --- 00:09:46.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.819 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:46.819 00:09:46.819 --- 10.0.0.1 ping statistics --- 00:09:46.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.819 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:46.819 only one NIC for nvmf test 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
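The multipath test rebuilds the same namespace topology, but it needs a second path to exercise, and the TCP init above left the second target IP empty on this single port-pair setup. The guard traced at multipath.sh@45 onward therefore prints the notice, tears everything down, and exits 0 so the rest of the suite keeps running. Roughly (the variable name is an assumption based on nvmf/common.sh; the trace only shows that the tested value is empty):

    # Early-exit guard, as walked through at multipath.sh@45-48.
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # assumed variable; left blank by nvmf_tcp_init above
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi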
00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.819 rmmod nvme_tcp 00:09:46.819 rmmod nvme_fabrics 00:09:46.819 rmmod nvme_keyring 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.819 17:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.207 00:09:48.207 real 0m9.782s 00:09:48.207 user 0m1.965s 00:09:48.207 sys 0m5.716s 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:48.207 ************************************ 00:09:48.207 END TEST nvmf_target_multipath 00:09:48.207 ************************************ 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.207 ************************************ 00:09:48.207 START TEST nvmf_zcopy 00:09:48.207 ************************************ 00:09:48.207 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:48.469 * Looking for test storage... 
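The scripts/common.sh trace that follows is the coverage tooling gate: the installed lcov version is extracted with awk, `lt 1.15 2` splits both version strings on '.', '-' and ':' and compares them field by field, and the older `--rc lcov_*` option spelling is selected when the comparison succeeds. A simplified sketch of that check, using the helper name from the trace (non-numeric version fields are ignored here, which the real helper handles more carefully):

# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
    done
    return 1   # equal versions are not "less than"
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi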
00:09:48.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.469 --rc genhtml_branch_coverage=1 00:09:48.469 --rc genhtml_function_coverage=1 00:09:48.469 --rc genhtml_legend=1 00:09:48.469 --rc geninfo_all_blocks=1 00:09:48.469 --rc geninfo_unexecuted_blocks=1 00:09:48.469 00:09:48.469 ' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.469 --rc genhtml_branch_coverage=1 00:09:48.469 --rc genhtml_function_coverage=1 00:09:48.469 --rc genhtml_legend=1 00:09:48.469 --rc geninfo_all_blocks=1 00:09:48.469 --rc geninfo_unexecuted_blocks=1 00:09:48.469 00:09:48.469 ' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.469 --rc genhtml_branch_coverage=1 00:09:48.469 --rc genhtml_function_coverage=1 00:09:48.469 --rc genhtml_legend=1 00:09:48.469 --rc geninfo_all_blocks=1 00:09:48.469 --rc geninfo_unexecuted_blocks=1 00:09:48.469 00:09:48.469 ' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.469 --rc genhtml_branch_coverage=1 00:09:48.469 --rc genhtml_function_coverage=1 00:09:48.469 --rc genhtml_legend=1 00:09:48.469 --rc geninfo_all_blocks=1 00:09:48.469 --rc geninfo_unexecuted_blocks=1 00:09:48.469 00:09:48.469 ' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.469 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.470 17:09:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.619 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:56.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:56.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:56.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:56.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:09:56.620 00:09:56.620 --- 10.0.0.2 ping statistics --- 00:09:56.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.620 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:56.620 00:09:56.620 --- 10.0.0.1 ping statistics --- 00:09:56.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.620 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:56.620 17:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2856667 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2856667 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2856667 ']' 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.620 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.620 [2024-10-01 17:09:54.090663] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
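nvmfappstart here launches the target application inside the target namespace with a one-core mask and then blocks until its RPC socket answers. The command line below is taken from the log; the polling loop is only a simplified stand-in for the harness's waitforlisten helper, and paths are abbreviated:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default RPC socket until the app is ready, bailing out if it died.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.5
done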
00:09:56.620 [2024-10-01 17:09:54.090717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.620 [2024-10-01 17:09:54.177397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.620 [2024-10-01 17:09:54.215281] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.620 [2024-10-01 17:09:54.215339] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.620 [2024-10-01 17:09:54.215348] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.620 [2024-10-01 17:09:54.215355] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.621 [2024-10-01 17:09:54.215361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.621 [2024-10-01 17:09:54.215386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 [2024-10-01 17:09:54.944863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 [2024-10-01 17:09:54.969199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 malloc0 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:56.621 { 00:09:56.621 "params": { 00:09:56.621 "name": "Nvme$subsystem", 00:09:56.621 "trtype": "$TEST_TRANSPORT", 00:09:56.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.621 "adrfam": "ipv4", 00:09:56.621 "trsvcid": "$NVMF_PORT", 00:09:56.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.621 "hdgst": ${hdgst:-false}, 00:09:56.621 "ddgst": ${ddgst:-false} 00:09:56.621 }, 00:09:56.621 "method": "bdev_nvme_attach_controller" 00:09:56.621 } 00:09:56.621 EOF 00:09:56.621 )") 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:56.621 17:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:56.621 "params": { 00:09:56.621 "name": "Nvme1", 00:09:56.621 "trtype": "tcp", 00:09:56.621 "traddr": "10.0.0.2", 00:09:56.621 "adrfam": "ipv4", 00:09:56.621 "trsvcid": "4420", 00:09:56.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.621 "hdgst": false, 00:09:56.621 "ddgst": false 00:09:56.621 }, 00:09:56.621 "method": "bdev_nvme_attach_controller" 00:09:56.621 }' 00:09:56.621 [2024-10-01 17:09:55.083856] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:56.621 [2024-10-01 17:09:55.083909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856852 ] 00:09:56.621 [2024-10-01 17:09:55.144714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.881 [2024-10-01 17:09:55.177975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.142 Running I/O for 10 seconds... 00:10:07.005 6647.00 IOPS, 51.93 MiB/s 6708.00 IOPS, 52.41 MiB/s 6730.33 IOPS, 52.58 MiB/s 6741.00 IOPS, 52.66 MiB/s 7010.60 IOPS, 54.77 MiB/s 7473.83 IOPS, 58.39 MiB/s 7807.14 IOPS, 60.99 MiB/s 8057.50 IOPS, 62.95 MiB/s 8247.33 IOPS, 64.43 MiB/s 8402.30 IOPS, 65.64 MiB/s 00:10:07.005 Latency(us) 00:10:07.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.005 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:07.005 Verification LBA range: start 0x0 length 0x1000 00:10:07.005 Nvme1n1 : 10.05 8373.44 65.42 0.00 0.00 15181.19 3058.35 43690.67 00:10:07.005 =================================================================================================================== 00:10:07.005 Total : 8373.44 65.42 0.00 0.00 15181.19 3058.35 43690.67 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2859041 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:07.266 { 00:10:07.266 "params": { 00:10:07.266 "name": "Nvme$subsystem", 00:10:07.266 "trtype": "$TEST_TRANSPORT", 00:10:07.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.266 "adrfam": "ipv4", 00:10:07.266 "trsvcid": "$NVMF_PORT", 00:10:07.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.266 "hdgst": 
${hdgst:-false}, 00:10:07.266 "ddgst": ${ddgst:-false} 00:10:07.266 }, 00:10:07.266 "method": "bdev_nvme_attach_controller" 00:10:07.266 } 00:10:07.266 EOF 00:10:07.266 )") 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:07.266 [2024-10-01 17:10:05.677340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.677373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:07.266 17:10:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:07.266 "params": { 00:10:07.266 "name": "Nvme1", 00:10:07.266 "trtype": "tcp", 00:10:07.266 "traddr": "10.0.0.2", 00:10:07.266 "adrfam": "ipv4", 00:10:07.266 "trsvcid": "4420", 00:10:07.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.266 "hdgst": false, 00:10:07.266 "ddgst": false 00:10:07.266 }, 00:10:07.266 "method": "bdev_nvme_attach_controller" 00:10:07.266 }' 00:10:07.266 [2024-10-01 17:10:05.689337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.689346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.701365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.701373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.713394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.713407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.723469] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
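Both bdevperf runs in this test read their bdev configuration from a generated JSON document on an inherited file descriptor; the attach-controller entry it carries is the one printed by gen_nvmf_target_json above. To rerun the first (verify) job by hand, roughly the same configuration can be fed from a file. The outer "subsystems"/"bdev" wrapper below is an assumption (only the inner entry is visible in the trace), and paths are abbreviated:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same knobs as the 10-second run above: queue depth 128, 8 KiB verify workload.
./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192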
00:10:07.266 [2024-10-01 17:10:05.723524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859041 ] 00:10:07.266 [2024-10-01 17:10:05.725424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.725434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.737454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.737462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.749485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.749493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.761516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.761524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.773548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.773555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.784371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.266 [2024-10-01 17:10:05.785579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.785586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.797612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.797624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.266 [2024-10-01 17:10:05.809644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.266 [2024-10-01 17:10:05.809657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.814143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.526 [2024-10-01 17:10:05.821671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.821680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.833707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.833720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.845736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.845746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.857765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.857774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.869794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:07.526 [2024-10-01 17:10:05.869803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.881835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.881850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.893858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.893869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.905894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.905904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.917919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.917929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.526 [2024-10-01 17:10:05.929952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.526 [2024-10-01 17:10:05.929962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:05.942217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:05.942234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:05.954021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:05.954034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 Running I/O for 5 seconds... 
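From this point the log is dominated by repeating pairs of subsystem.c "Requested NSID 1 already in use" and nvmf_rpc.c "Unable to add namespace" messages. They look like the intended workload of this phase rather than a malfunction: while the 5-second randrw bdevperf job runs, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that already exists; each RPC pauses the subsystem (the nvmf_rpc_ns_paused callback in the trace), fails, and resumes it, which appears to be the point of the exercise, driving pause/resume while zero-copy I/O is in flight. A rough stand-alone equivalent of that loop (the exact loop body in zcopy.sh is not shown in the trace, and the rpc.py path is abbreviated):

# Hammer the add-ns RPC for as long as the benchmark process is alive;
# every call is expected to fail with "NSID 1 already in use".
while kill -0 "$perfpid" 2> /dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done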
00:10:07.527 [2024-10-01 17:10:05.969070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:05.969087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:05.982102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:05.982120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:05.994743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:05.994760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.007730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.007747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.020192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.020208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.033026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.033041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.045502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.045517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.058290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.058305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.527 [2024-10-01 17:10:06.070879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.527 [2024-10-01 17:10:06.070894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.084027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.084042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.097315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.097330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.110674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.110691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.124469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.124485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.138344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.138360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.151606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 
[2024-10-01 17:10:06.151625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.164932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.164948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.177976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.177991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.191485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.191501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.204453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.204468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.787 [2024-10-01 17:10:06.216923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.787 [2024-10-01 17:10:06.216939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.230215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.230230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.243173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.243188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.255933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.255948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.269554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.269569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.282366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.282381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.295668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.295683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.308339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.308354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.321411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.321426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.788 [2024-10-01 17:10:06.334144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.788 [2024-10-01 17:10:06.334159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.048 [2024-10-01 17:10:06.346705] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.048 [2024-10-01 17:10:06.346720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.048 [2024-10-01 17:10:06.360016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.360031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.372851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.372867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.385963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.385978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.398475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.398494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.411972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.411987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.424881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.424896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.437991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.438010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.450518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.450533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.463220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.463236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.476586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.476602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.489890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.489906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.503137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.503152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.516757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.516772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.530441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.530457] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.543541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.543557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.556109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.556124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.569278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.569293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.049 [2024-10-01 17:10:06.582049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.049 [2024-10-01 17:10:06.582064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.595847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.595862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.609146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.609161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.621470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.621485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.635186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.635201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.647955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.647975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.661287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.661303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.674588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.674604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.687696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.687712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.700238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.700254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.713251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.713267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.725757] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.725772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.739379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.739394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.752297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.752313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.765577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.765593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.778220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.778236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.791750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.791766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.804694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.804710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.817720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.817736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.830260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.830275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.842734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.842750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.310 [2024-10-01 17:10:06.856108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.310 [2024-10-01 17:10:06.856124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.868917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.868933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.882094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.882110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.895793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.895816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.908921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.908937] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.922523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.922539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.935732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.935748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.949180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.949195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 19091.00 IOPS, 149.15 MiB/s [2024-10-01 17:10:06.962062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.962078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.974862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.974878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:06.988331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:06.988347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.001630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.001646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.015264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.015281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.027635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.027651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.040750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.040766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.053958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.053974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.066909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.066924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.079399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.079414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.093195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.093212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.572 [2024-10-01 17:10:07.106005] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.572 [2024-10-01 17:10:07.106021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.119574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.119590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.133049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.133064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.146561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.146577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.159904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.159920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.172868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.172884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.185664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.185679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.198636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.198652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.211416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.211432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.224107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.224123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.237363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.237378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.250405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.250420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.263927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.263943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.276540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.276555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.289127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.289143] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.301675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.301690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.315071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.315086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.328440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.328455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.342200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.342215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.354891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.354907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.834 [2024-10-01 17:10:07.368356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.834 [2024-10-01 17:10:07.368371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.382126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.382142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.395388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.395403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.408025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.408040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.420795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.420810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.433768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.433783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.446490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.446505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.459786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.459802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.473261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.473277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.486384] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.486399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.499576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.499592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.512886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.095 [2024-10-01 17:10:07.512903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.095 [2024-10-01 17:10:07.525889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.525905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.538615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.538630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.550980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.551001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.563802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.563817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.577095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.577110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.590577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.590592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.603735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.603750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.616817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.616832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.096 [2024-10-01 17:10:07.630384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.096 [2024-10-01 17:10:07.630399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.643101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.643116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.655762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.655777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.668112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.668126] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.680915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.680930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.693933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.693948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.707710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.707725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.720065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.720080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.733178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.733193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.746012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.746028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.758539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.758554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.771822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.771837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.785369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.785385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.798242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.798258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.811090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.811106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.357 [2024-10-01 17:10:07.823443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.357 [2024-10-01 17:10:07.823458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.836026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.836041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.849327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.849342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.861953] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.861968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.874825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.874844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.888152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.888167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.358 [2024-10-01 17:10:07.901881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.358 [2024-10-01 17:10:07.901897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.914595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.914611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.928128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.928143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.940428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.940444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.953345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.953360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 19136.50 IOPS, 149.50 MiB/s [2024-10-01 17:10:07.965922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.965937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.978501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.978515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:07.990820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:07.990835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.003608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.003624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.017132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.017148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.029771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.029786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.043461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 
17:10:08.043476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.056939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.056954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.070172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.070187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.082904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.082920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.096244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.096259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.108760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.108775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.121438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.121457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.134288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.134303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.147630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.147645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.618 [2024-10-01 17:10:08.160419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.618 [2024-10-01 17:10:08.160434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.173689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.173705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.186427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.186442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.200027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.200042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.212607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.212622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.226167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.226182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.239668] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.239683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.253041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.253056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.265692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.265707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.279033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.279048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.292738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.292754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.306105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.306120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.319654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.319669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.332873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.332889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.345877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.345893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.358788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.358804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.372264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.372284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.384957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.384972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.398043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.398059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.411382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.411398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.878 [2024-10-01 17:10:08.425166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.878 [2024-10-01 17:10:08.425182] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.438839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.438855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.452634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.452649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.466003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.466019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.478935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.478951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.490950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.490966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.504220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.504236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.517745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.517761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.531313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.531329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.543892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.543908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.556327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.556342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.138 [2024-10-01 17:10:08.569732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.138 [2024-10-01 17:10:08.569748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.582910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.582925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.596349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.596365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.609474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.609490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.622542] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.622558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.635843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.635859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.649096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.649112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.662569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.662584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.139 [2024-10-01 17:10:08.675172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.139 [2024-10-01 17:10:08.675188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.687468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.687484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.701020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.701035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.714154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.714170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.727402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.727417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.740924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.740939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.754195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.754210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.767182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.767198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.780418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.780433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.793745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.793761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.807210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.807227] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.820488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.820504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.833677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.833693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.846725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.846741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.859904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.859919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.873031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.873047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.886417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.886433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.899818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.899833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.912513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.912529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.925279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.925295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.399 [2024-10-01 17:10:08.937648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.399 [2024-10-01 17:10:08.937663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.658 [2024-10-01 17:10:08.950185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.658 [2024-10-01 17:10:08.950201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.658 [2024-10-01 17:10:08.963332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.658 [2024-10-01 17:10:08.963348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.658 19177.33 IOPS, 149.82 MiB/s [2024-10-01 17:10:08.977111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.658 [2024-10-01 17:10:08.977126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.658 [2024-10-01 17:10:08.989416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:08.989432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.002901] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.002917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.015835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.015851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.029329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.029345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.042182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.042197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.055311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.055326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.068357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.068372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.081038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.081053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.093371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.093386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.106674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.106694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.120134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.120149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.133231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.133246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.146352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.146369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.159696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.159711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.173218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.173233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.185754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.185769] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.659 [2024-10-01 17:10:09.198719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.659 [2024-10-01 17:10:09.198734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.211721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.211737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.224969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.224984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.238617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.238632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.252006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.252021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.265337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.265352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.278487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.278502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.291649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.291664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.305134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.305149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.317801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.317816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.330316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.330332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.343897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.343913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.356686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.356705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.370042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.370057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.382467] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.382482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.395728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.395744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.409272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.409287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.422491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.422506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.435648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.435663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.448668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.448684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.918 [2024-10-01 17:10:09.461951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.918 [2024-10-01 17:10:09.461966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.475138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.475154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.488661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.488676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.501146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.501161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.514700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.514715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.527887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.527902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.540596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.540611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.554050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.554066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.566997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.567012] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.580339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.580355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.593442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.593458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.606476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.606495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.619923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.619938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.632944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.632959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.645879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.645894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.658612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.658627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.671174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.671189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.684780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.684795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.698205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.698220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.711603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.711618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.179 [2024-10-01 17:10:09.725159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.179 [2024-10-01 17:10:09.725174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.738853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.738869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.752142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.752157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.765277] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.765292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.778092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.778107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.791090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.791105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.804306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.804321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.817718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.817733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.831416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.831431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.844717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.844733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.858018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.858037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.871538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.871553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.885414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.885429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.898246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.898261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.911784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.911799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.925126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.925141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.938536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.938551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.951625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.951640] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 [2024-10-01 17:10:09.965168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.965184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.439 19215.50 IOPS, 150.12 MiB/s [2024-10-01 17:10:09.978189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.439 [2024-10-01 17:10:09.978205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:09.990681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:09.990697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.003821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.003839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.017414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.017430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.030566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.030582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.043288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.043304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.057073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.057089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.070713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.070730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.083451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.083467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.096430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.096446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.108934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.108949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.122423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.122439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.135560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.135575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.149033] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.149048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.162002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.162017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.175068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.175084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.188269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.188285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.201402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.201418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.214396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.214411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.227890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.227906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.700 [2024-10-01 17:10:10.240680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.700 [2024-10-01 17:10:10.240695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.254144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.254160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.267359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.267375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.280060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.280076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.293756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.293772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.307260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.307275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.320688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.320704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.334217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.334232] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.347635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.347651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.360533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.360548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.372990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.373011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.386626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.386642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.400408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.400423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.412914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.412929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.425460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.425476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.437983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.438003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.450656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.450672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.463558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.463574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.961 [2024-10-01 17:10:10.477009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.961 [2024-10-01 17:10:10.477024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.962 [2024-10-01 17:10:10.489869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.962 [2024-10-01 17:10:10.489884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.962 [2024-10-01 17:10:10.502763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.962 [2024-10-01 17:10:10.502779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.221 [2024-10-01 17:10:10.514789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.221 [2024-10-01 17:10:10.514805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.527740] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.527756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.541000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.541016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.554473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.554488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.567212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.567228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.579642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.579658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.592722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.592738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.605443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.605459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.619304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.619320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.632223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.632238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.645602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.645617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.658016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.658031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.671601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.671616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.684967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.684982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.698400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.698414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.711377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.711393] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.724924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.724940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.737617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.737632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.750262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.750278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.222 [2024-10-01 17:10:10.763399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.222 [2024-10-01 17:10:10.763415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.775778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.775793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.789218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.789233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.801603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.801618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.814922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.814937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.827641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.827656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.840236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.840255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.853621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.853636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.866935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.866950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.880291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.880307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.893069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.893083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.906442] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.906457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.919938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.919953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.933282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.933297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.946176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.946191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.959542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.959557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 19218.20 IOPS, 150.14 MiB/s [2024-10-01 17:10:10.971431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.971445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 00:10:12.482 Latency(us) 00:10:12.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.482 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:12.482 Nvme1n1 : 5.01 19218.92 150.15 0.00 0.00 6652.95 3017.39 18459.31 00:10:12.482 =================================================================================================================== 00:10:12.482 Total : 19218.92 150.15 0.00 0.00 6652.95 3017.39 18459.31 00:10:12.482 [2024-10-01 17:10:10.981540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.981554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:10.993574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:10.993587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:11.005601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:11.005613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-10-01 17:10:11.017634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-10-01 17:10:11.017646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.029659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 [2024-10-01 17:10:11.029669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.041687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 [2024-10-01 17:10:11.041702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.053719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 
[2024-10-01 17:10:11.053728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.065751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 [2024-10-01 17:10:11.065761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.077780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 [2024-10-01 17:10:11.077791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 [2024-10-01 17:10:11.089809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.742 [2024-10-01 17:10:11.089816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2859041) - No such process 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2859041 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.742 delay0 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.742 17:10:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:12.742 [2024-10-01 17:10:11.271203] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:20.878 [2024-10-01 17:10:18.437037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168000 is same with the state(6) to be set 00:10:20.878 Initializing NVMe Controllers 00:10:20.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:20.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:20.878 Initialization complete. Launching workers. 
00:10:20.878 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 23387 00:10:20.878 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23543, failed to submit 112 00:10:20.878 success 23452, unsuccessful 91, failed 0 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.879 rmmod nvme_tcp 00:10:20.879 rmmod nvme_fabrics 00:10:20.879 rmmod nvme_keyring 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2856667 ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2856667 ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856667' 00:10:20.879 killing process with pid 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2856667 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:20.879 17:10:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.879 17:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.264 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.264 00:10:22.264 real 0m34.103s 00:10:22.264 user 0m45.686s 00:10:22.264 sys 0m11.165s 00:10:22.264 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.264 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.264 ************************************ 00:10:22.264 END TEST nvmf_zcopy 00:10:22.264 ************************************ 00:10:22.525 17:10:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 ************************************ 00:10:22.526 START TEST nvmf_nmic 00:10:22.526 ************************************ 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:22.526 * Looking for test storage... 
00:10:22.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.526 17:10:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.526 --rc genhtml_branch_coverage=1 00:10:22.526 --rc genhtml_function_coverage=1 00:10:22.526 --rc genhtml_legend=1 00:10:22.526 --rc geninfo_all_blocks=1 00:10:22.526 --rc geninfo_unexecuted_blocks=1 00:10:22.526 00:10:22.526 ' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.526 --rc genhtml_branch_coverage=1 00:10:22.526 --rc genhtml_function_coverage=1 00:10:22.526 --rc genhtml_legend=1 00:10:22.526 --rc geninfo_all_blocks=1 00:10:22.526 --rc geninfo_unexecuted_blocks=1 00:10:22.526 00:10:22.526 ' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.526 --rc genhtml_branch_coverage=1 00:10:22.526 --rc genhtml_function_coverage=1 00:10:22.526 --rc genhtml_legend=1 00:10:22.526 --rc geninfo_all_blocks=1 00:10:22.526 --rc geninfo_unexecuted_blocks=1 00:10:22.526 00:10:22.526 ' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.526 --rc genhtml_branch_coverage=1 00:10:22.526 --rc genhtml_function_coverage=1 00:10:22.526 --rc genhtml_legend=1 00:10:22.526 --rc geninfo_all_blocks=1 00:10:22.526 --rc geninfo_unexecuted_blocks=1 00:10:22.526 00:10:22.526 ' 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.526 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.787 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:22.788 
17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.788 17:10:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.397 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.398 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.398 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.398 17:10:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.398 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.398 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.398 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.658 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.658 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.658 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.658 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:10:29.658 00:10:29.658 --- 10.0.0.2 ping statistics --- 00:10:29.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.658 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:10:29.658 17:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:10:29.658 00:10:29.658 --- 10.0.0.1 ping statistics --- 00:10:29.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.658 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2865656 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2865656 00:10:29.658 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2865656 ']' 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.659 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.659 [2024-10-01 17:10:28.117004] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:10:29.659 [2024-10-01 17:10:28.117074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.659 [2024-10-01 17:10:28.189329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.918 [2024-10-01 17:10:28.230333] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.918 [2024-10-01 17:10:28.230397] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.918 [2024-10-01 17:10:28.230406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.918 [2024-10-01 17:10:28.230413] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.918 [2024-10-01 17:10:28.230419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.918 [2024-10-01 17:10:28.230567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.918 [2024-10-01 17:10:28.230675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.918 [2024-10-01 17:10:28.230838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.918 [2024-10-01 17:10:28.230840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.614 [2024-10-01 17:10:28.957623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.614 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.614 Malloc0 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 [2024-10-01 17:10:29.016843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:30.615 test case1: single bdev can't be used in multiple subsystems 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 [2024-10-01 17:10:29.052739] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:30.615 [2024-10-01 17:10:29.052758] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:30.615 [2024-10-01 17:10:29.052766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.615 request: 00:10:30.615 { 00:10:30.615 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.615 "namespace": { 00:10:30.615 "bdev_name": "Malloc0", 00:10:30.615 "no_auto_visible": false 
00:10:30.615 }, 00:10:30.615 "method": "nvmf_subsystem_add_ns", 00:10:30.615 "req_id": 1 00:10:30.615 } 00:10:30.615 Got JSON-RPC error response 00:10:30.615 response: 00:10:30.615 { 00:10:30.615 "code": -32602, 00:10:30.615 "message": "Invalid parameters" 00:10:30.615 } 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:30.615 Adding namespace failed - expected result. 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:30.615 test case2: host connect to nvmf target in multiple paths 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.615 [2024-10-01 17:10:29.064899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.615 17:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.049 17:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:33.965 17:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.965 17:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:33.965 17:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.965 17:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:33.965 17:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:35.893 17:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:35.893 [global] 00:10:35.893 thread=1 00:10:35.893 invalidate=1 00:10:35.894 rw=write 00:10:35.894 time_based=1 00:10:35.894 runtime=1 00:10:35.894 ioengine=libaio 00:10:35.894 direct=1 00:10:35.894 bs=4096 00:10:35.894 iodepth=1 00:10:35.894 norandommap=0 00:10:35.894 numjobs=1 00:10:35.894 00:10:35.894 verify_dump=1 00:10:35.894 verify_backlog=512 00:10:35.894 verify_state_save=0 00:10:35.894 do_verify=1 00:10:35.894 verify=crc32c-intel 00:10:35.894 [job0] 00:10:35.894 filename=/dev/nvme0n1 00:10:35.894 Could not set queue depth (nvme0n1) 00:10:36.154 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.154 fio-3.35 00:10:36.154 Starting 1 thread 00:10:37.540 00:10:37.540 job0: (groupid=0, jobs=1): err= 0: pid=2866984: Tue Oct 1 17:10:35 2024 00:10:37.540 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:10:37.540 slat (nsec): min=24681, max=25955, avg=25230.22, stdev=370.22 00:10:37.540 clat (usec): min=1135, max=42974, avg=39917.08, stdev=9687.86 00:10:37.540 lat (usec): min=1160, max=42999, avg=39942.31, stdev=9687.89 00:10:37.540 clat percentiles (usec): 00:10:37.540 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[42206], 00:10:37.540 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:37.540 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:10:37.540 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:37.540 | 99.99th=[42730] 00:10:37.540 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:37.540 slat (nsec): min=9636, max=51419, avg=27431.39, stdev=9675.80 00:10:37.540 clat (usec): min=236, max=793, avg=592.60, stdev=102.32 00:10:37.540 lat (usec): min=245, max=834, avg=620.03, stdev=106.48 00:10:37.540 clat percentiles (usec): 00:10:37.540 | 1.00th=[ 351], 5.00th=[ 392], 10.00th=[ 433], 20.00th=[ 502], 00:10:37.540 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 635], 00:10:37.540 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 701], 95.00th=[ 725], 00:10:37.540 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 791], 99.95th=[ 791], 00:10:37.540 | 99.99th=[ 791] 00:10:37.540 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.540 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.540 lat (usec) : 250=0.19%, 500=17.92%, 750=76.60%, 1000=1.89% 00:10:37.540 lat (msec) : 2=0.19%, 50=3.21% 00:10:37.540 cpu : usr=0.48%, sys=1.54%, ctx=530, majf=0, minf=1 00:10:37.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.540 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.540 00:10:37.540 Run status group 0 (all jobs): 00:10:37.540 READ: bw=69.2KiB/s (70.9kB/s), 69.2KiB/s-69.2KiB/s (70.9kB/s-70.9kB/s), io=72.0KiB (73.7kB), run=1040-1040msec 00:10:37.540 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:10:37.540 00:10:37.540 Disk stats (read/write): 00:10:37.540 nvme0n1: ios=64/512, merge=0/0, ticks=606/297, in_queue=903, util=93.39% 00:10:37.540 17:10:35 
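The fio-wrapper call above drives a single libaio write-plus-verify job against /dev/nvme0n1. A hedged stand-alone equivalent is sketched below: the [global]/[job0] contents are copied from the job file printed in the log, while the temporary file path, the here-doc, and the direct fio invocation are illustrative assumptions rather than the wrapper's actual implementation.

# Sketch: reproduce the logged fio-wrapper workload by hand.
# Assumes fio with libaio support is installed and /dev/nvme0n1 is the connected SPDK namespace.
cat > /tmp/nvmf_write_verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio /tmp/nvmf_write_verify.fio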
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.540 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.540 rmmod nvme_tcp 00:10:37.540 rmmod nvme_fabrics 00:10:37.540 rmmod nvme_keyring 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2865656 ']' 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2865656 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2865656 ']' 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2865656 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.541 17:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2865656 00:10:37.541 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.541 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.541 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2865656' 00:10:37.541 killing process with pid 2865656 00:10:37.541 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2865656 00:10:37.541 17:10:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2865656 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.802 17:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.715 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.976 00:10:39.976 real 0m17.407s 00:10:39.976 user 0m50.950s 00:10:39.976 sys 0m6.165s 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.976 ************************************ 00:10:39.976 END TEST nvmf_nmic 00:10:39.976 ************************************ 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.976 ************************************ 00:10:39.976 START TEST nvmf_fio_target 00:10:39.976 ************************************ 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.976 * Looking for test storage... 
00:10:39.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:39.976 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:40.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.238 --rc genhtml_branch_coverage=1 00:10:40.238 --rc genhtml_function_coverage=1 00:10:40.238 --rc genhtml_legend=1 00:10:40.238 --rc geninfo_all_blocks=1 00:10:40.238 --rc geninfo_unexecuted_blocks=1 00:10:40.238 00:10:40.238 ' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:40.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.238 --rc genhtml_branch_coverage=1 00:10:40.238 --rc genhtml_function_coverage=1 00:10:40.238 --rc genhtml_legend=1 00:10:40.238 --rc geninfo_all_blocks=1 00:10:40.238 --rc geninfo_unexecuted_blocks=1 00:10:40.238 00:10:40.238 ' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:40.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.238 --rc genhtml_branch_coverage=1 00:10:40.238 --rc genhtml_function_coverage=1 00:10:40.238 --rc genhtml_legend=1 00:10:40.238 --rc geninfo_all_blocks=1 00:10:40.238 --rc geninfo_unexecuted_blocks=1 00:10:40.238 00:10:40.238 ' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:40.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.238 --rc genhtml_branch_coverage=1 00:10:40.238 --rc genhtml_function_coverage=1 00:10:40.238 --rc genhtml_legend=1 00:10:40.238 --rc geninfo_all_blocks=1 00:10:40.238 --rc geninfo_unexecuted_blocks=1 00:10:40.238 00:10:40.238 ' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.238 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.239 17:10:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.239 17:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.376 17:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:48.376 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:48.376 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.376 17:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:48.376 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:48.376 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.376 17:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.376 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:10:48.377 00:10:48.377 --- 10.0.0.2 ping statistics --- 00:10:48.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.377 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:10:48.377 00:10:48.377 --- 10.0.0.1 ping statistics --- 00:10:48.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.377 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2871640 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2871640 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2871640 ']' 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.377 17:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.377 [2024-10-01 17:10:45.959563] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:10:48.377 [2024-10-01 17:10:45.959621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.377 [2024-10-01 17:10:46.028317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.377 [2024-10-01 17:10:46.062558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.377 [2024-10-01 17:10:46.062599] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.377 [2024-10-01 17:10:46.062607] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.377 [2024-10-01 17:10:46.062614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.377 [2024-10-01 17:10:46.062620] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.377 [2024-10-01 17:10:46.062762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.377 [2024-10-01 17:10:46.062881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.377 [2024-10-01 17:10:46.063041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.377 [2024-10-01 17:10:46.063042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.377 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.638 [2024-10-01 17:10:46.952152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.638 17:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.638 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:48.638 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.899 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:48.899 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.159 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:49.159 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.420 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:49.420 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:49.420 17:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.681 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:49.681 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.942 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:49.942 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.203 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:50.203 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:50.203 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.464 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.464 17:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.723 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.723 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.723 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.983 [2024-10-01 17:10:49.402338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.983 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:51.243 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:51.503 17:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.888 17:10:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:52.888 17:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:52.888 17:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.888 17:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:52.888 17:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:52.888 17:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:55.438 17:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:55.438 [global] 00:10:55.438 thread=1 00:10:55.438 invalidate=1 00:10:55.438 rw=write 00:10:55.438 time_based=1 00:10:55.438 runtime=1 00:10:55.438 ioengine=libaio 00:10:55.438 direct=1 00:10:55.438 bs=4096 00:10:55.438 iodepth=1 00:10:55.438 norandommap=0 00:10:55.438 numjobs=1 00:10:55.438 00:10:55.438 verify_dump=1 00:10:55.438 verify_backlog=512 00:10:55.438 verify_state_save=0 00:10:55.438 do_verify=1 00:10:55.438 verify=crc32c-intel 00:10:55.438 [job0] 00:10:55.438 filename=/dev/nvme0n1 00:10:55.438 [job1] 00:10:55.438 filename=/dev/nvme0n2 00:10:55.438 [job2] 00:10:55.438 filename=/dev/nvme0n3 00:10:55.438 [job3] 00:10:55.438 filename=/dev/nvme0n4 00:10:55.438 Could not set queue depth (nvme0n1) 00:10:55.438 Could not set queue depth (nvme0n2) 00:10:55.438 Could not set queue depth (nvme0n3) 00:10:55.438 Could not set queue depth (nvme0n4) 00:10:55.438 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.439 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.439 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.439 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.439 fio-3.35 00:10:55.439 Starting 4 threads 00:10:56.858 00:10:56.858 job0: (groupid=0, jobs=1): err= 0: pid=2873322: Tue Oct 1 17:10:55 2024 00:10:56.858 read: IOPS=29, BW=120KiB/s (123kB/s)(120KiB/1003msec) 00:10:56.858 slat (nsec): min=26026, max=26842, avg=26389.47, stdev=190.24 00:10:56.858 clat (usec): min=806, max=42994, avg=22909.43, stdev=20830.77 00:10:56.858 lat (usec): min=833, max=43021, avg=22935.82, stdev=20830.78 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 881], 20.00th=[ 963], 
00:10:56.858 | 30.00th=[ 1057], 40.00th=[ 1123], 50.00th=[41681], 60.00th=[41681], 00:10:56.858 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:56.858 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:56.858 | 99.99th=[43254] 00:10:56.858 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:56.858 slat (usec): min=2, max=1945, avg=33.44, stdev=85.46 00:10:56.858 clat (usec): min=207, max=936, avg=573.53, stdev=119.31 00:10:56.858 lat (usec): min=218, max=2419, avg=606.98, stdev=146.85 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 281], 5.00th=[ 367], 10.00th=[ 416], 20.00th=[ 478], 00:10:56.858 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:10:56.858 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 766], 00:10:56.858 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:10:56.858 | 99.99th=[ 938] 00:10:56.858 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.858 lat (usec) : 250=0.18%, 500=25.65%, 750=63.10%, 1000=6.64% 00:10:56.858 lat (msec) : 2=1.48%, 50=2.95% 00:10:56.858 cpu : usr=0.80%, sys=1.50%, ctx=544, majf=0, minf=1 00:10:56.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.858 job1: (groupid=0, jobs=1): err= 0: pid=2873342: Tue Oct 1 17:10:55 2024 00:10:56.858 read: IOPS=671, BW=2688KiB/s (2752kB/s)(2696KiB/1003msec) 00:10:56.858 slat (nsec): min=7096, max=61738, avg=25684.67, stdev=6948.17 00:10:56.858 clat (usec): min=164, max=41965, avg=793.11, stdev=2227.58 00:10:56.858 lat (usec): min=172, max=41993, avg=818.79, stdev=2227.68 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 253], 5.00th=[ 449], 10.00th=[ 519], 20.00th=[ 594], 00:10:56.858 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668], 00:10:56.858 | 70.00th=[ 758], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 906], 00:10:56.858 | 99.00th=[ 1139], 99.50th=[ 1237], 99.90th=[42206], 99.95th=[42206], 00:10:56.858 | 99.99th=[42206] 00:10:56.858 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:10:56.858 slat (usec): min=3, max=2395, avg=29.02, stdev=75.04 00:10:56.858 clat (usec): min=100, max=1173, avg=399.03, stdev=169.52 00:10:56.858 lat (usec): min=110, max=2780, avg=428.04, stdev=190.19 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 104], 5.00th=[ 113], 10.00th=[ 126], 20.00th=[ 293], 00:10:56.858 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 400], 00:10:56.858 | 70.00th=[ 441], 80.00th=[ 537], 90.00th=[ 644], 95.00th=[ 709], 00:10:56.858 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 865], 99.95th=[ 1172], 00:10:56.858 | 99.99th=[ 1172] 00:10:56.858 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=2 00:10:56.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:56.858 lat (usec) : 250=9.13%, 500=40.28%, 750=36.40%, 1000=13.25% 00:10:56.858 lat (msec) : 2=0.82%, 50=0.12% 00:10:56.858 cpu : usr=2.10%, sys=4.79%, ctx=1702, majf=0, minf=1 00:10:56.858 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 issued rwts: total=674,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.858 job2: (groupid=0, jobs=1): err= 0: pid=2873363: Tue Oct 1 17:10:55 2024 00:10:56.858 read: IOPS=128, BW=514KiB/s (526kB/s)(524KiB/1020msec) 00:10:56.858 slat (nsec): min=25981, max=45403, avg=27523.48, stdev=3521.90 00:10:56.858 clat (usec): min=882, max=42982, avg=5317.15, stdev=12281.05 00:10:56.858 lat (usec): min=909, max=43009, avg=5344.68, stdev=12281.76 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 898], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1074], 00:10:56.858 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:10:56.858 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[17695], 95.00th=[41681], 00:10:56.858 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:56.858 | 99.99th=[42730] 00:10:56.858 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:56.858 slat (nsec): min=3762, max=54304, avg=22679.99, stdev=12839.42 00:10:56.858 clat (usec): min=141, max=1517, avg=593.05, stdev=164.00 00:10:56.858 lat (usec): min=151, max=1526, avg=615.73, stdev=165.54 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 239], 5.00th=[ 318], 10.00th=[ 371], 20.00th=[ 453], 00:10:56.858 | 30.00th=[ 510], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 644], 00:10:56.858 | 70.00th=[ 676], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:10:56.858 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1516], 99.95th=[ 1516], 00:10:56.858 | 99.99th=[ 1516] 00:10:56.858 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.858 lat (usec) : 250=1.09%, 500=22.40%, 750=42.61%, 1000=14.15% 00:10:56.858 lat (msec) : 2=17.57%, 20=0.16%, 50=2.02% 00:10:56.858 cpu : usr=0.69%, sys=1.47%, ctx=645, majf=0, minf=1 00:10:56.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.858 issued rwts: total=131,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.858 job3: (groupid=0, jobs=1): err= 0: pid=2873370: Tue Oct 1 17:10:55 2024 00:10:56.858 read: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec) 00:10:56.858 slat (nsec): min=7060, max=63905, avg=25172.42, stdev=7783.04 00:10:56.858 clat (usec): min=305, max=1102, avg=728.19, stdev=103.03 00:10:56.858 lat (usec): min=332, max=1130, avg=753.36, stdev=103.41 00:10:56.858 clat percentiles (usec): 00:10:56.858 | 1.00th=[ 453], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 635], 00:10:56.858 | 30.00th=[ 693], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:10:56.858 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:10:56.859 | 99.00th=[ 938], 99.50th=[ 1029], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:56.859 | 99.99th=[ 1106] 00:10:56.859 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:56.859 slat (nsec): min=10076, max=64356, avg=27039.56, stdev=12603.16 00:10:56.859 clat (usec): min=165, 
max=647, avg=403.82, stdev=80.32 00:10:56.859 lat (usec): min=176, max=682, avg=430.86, stdev=87.65 00:10:56.859 clat percentiles (usec): 00:10:56.859 | 1.00th=[ 237], 5.00th=[ 273], 10.00th=[ 302], 20.00th=[ 326], 00:10:56.859 | 30.00th=[ 351], 40.00th=[ 379], 50.00th=[ 416], 60.00th=[ 437], 00:10:56.859 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 502], 95.00th=[ 523], 00:10:56.859 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 652], 00:10:56.859 | 99.99th=[ 652] 00:10:56.859 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:56.859 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:56.859 lat (usec) : 250=0.97%, 500=52.17%, 750=26.40%, 1000=20.23% 00:10:56.859 lat (msec) : 2=0.23% 00:10:56.859 cpu : usr=2.30%, sys=4.70%, ctx=1751, majf=0, minf=1 00:10:56.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.859 issued rwts: total=726,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.859 00:10:56.859 Run status group 0 (all jobs): 00:10:56.859 READ: bw=6122KiB/s (6268kB/s), 120KiB/s-2901KiB/s (123kB/s-2971kB/s), io=6244KiB (6394kB), run=1001-1020msec 00:10:56.859 WRITE: bw=11.8MiB/s (12.3MB/s), 2008KiB/s-4092KiB/s (2056kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1020msec 00:10:56.859 00:10:56.859 Disk stats (read/write): 00:10:56.859 nvme0n1: ios=75/512, merge=0/0, ticks=609/283, in_queue=892, util=84.07% 00:10:56.859 nvme0n2: ios=603/1024, merge=0/0, ticks=492/399, in_queue=891, util=87.96% 00:10:56.859 nvme0n3: ios=150/512, merge=0/0, ticks=942/285, in_queue=1227, util=92.18% 00:10:56.859 nvme0n4: ios=569/993, merge=0/0, ticks=574/369, in_queue=943, util=94.22% 00:10:56.859 17:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:56.859 [global] 00:10:56.859 thread=1 00:10:56.859 invalidate=1 00:10:56.859 rw=randwrite 00:10:56.859 time_based=1 00:10:56.859 runtime=1 00:10:56.859 ioengine=libaio 00:10:56.859 direct=1 00:10:56.859 bs=4096 00:10:56.859 iodepth=1 00:10:56.859 norandommap=0 00:10:56.859 numjobs=1 00:10:56.859 00:10:56.859 verify_dump=1 00:10:56.859 verify_backlog=512 00:10:56.859 verify_state_save=0 00:10:56.859 do_verify=1 00:10:56.859 verify=crc32c-intel 00:10:56.859 [job0] 00:10:56.859 filename=/dev/nvme0n1 00:10:56.859 [job1] 00:10:56.859 filename=/dev/nvme0n2 00:10:56.859 [job2] 00:10:56.859 filename=/dev/nvme0n3 00:10:56.859 [job3] 00:10:56.859 filename=/dev/nvme0n4 00:10:56.859 Could not set queue depth (nvme0n1) 00:10:56.859 Could not set queue depth (nvme0n2) 00:10:56.859 Could not set queue depth (nvme0n3) 00:10:56.859 Could not set queue depth (nvme0n4) 00:10:57.120 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.120 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.120 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.120 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.120 fio-3.35 00:10:57.120 Starting 4 threads 
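Pulling the thread of rpc.py calls traced above into one place: the target gets a TCP transport, two plain malloc bdevs, a RAID-0 volume and a concat volume built from further malloc bdevs, all exposed as namespaces of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, after which the initiator connects with nvme-cli and the fio jobs run against /dev/nvme0n1..n4. The following is a condensed sketch using the same commands and arguments as the log; the --hostnqn/--hostid options of nvme connect and all error handling are omitted for brevity.

#!/usr/bin/env bash
# Bring-up of the test subsystem, mirroring target/fio.sh as traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192

# Two stand-alone malloc namespaces ...
"$rpc" bdev_malloc_create 64 512    # Malloc0
"$rpc" bdev_malloc_create 64 512    # Malloc1
# ... a RAID-0 volume over two more malloc bdevs ...
"$rpc" bdev_malloc_create 64 512    # Malloc2
"$rpc" bdev_malloc_create 64 512    # Malloc3
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
# ... and a concat volume over three more.
"$rpc" bdev_malloc_create 64 512    # Malloc4
"$rpc" bdev_malloc_create 64 512    # Malloc5
"$rpc" bdev_malloc_create 64 512    # Malloc6
"$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# Expose everything as four namespaces of one subsystem and listen on TCP.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: after connecting, the four namespaces appear as
# /dev/nvme0n1..n4, which is what the fio job files above point at.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420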
00:10:58.529 00:10:58.529 job0: (groupid=0, jobs=1): err= 0: pid=2873843: Tue Oct 1 17:10:56 2024 00:10:58.529 read: IOPS=618, BW=2474KiB/s (2533kB/s)(2476KiB/1001msec) 00:10:58.529 slat (nsec): min=7035, max=45567, avg=23324.31, stdev=8304.70 00:10:58.529 clat (usec): min=525, max=1079, avg=790.65, stdev=78.69 00:10:58.529 lat (usec): min=532, max=1088, avg=813.97, stdev=80.94 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 570], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 725], 00:10:58.529 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:10:58.529 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 914], 00:10:58.529 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1074], 99.95th=[ 1074], 00:10:58.529 | 99.99th=[ 1074] 00:10:58.529 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:58.529 slat (nsec): min=8963, max=67018, avg=28356.47, stdev=10992.47 00:10:58.529 clat (usec): min=199, max=890, avg=445.58, stdev=102.89 00:10:58.529 lat (usec): min=229, max=925, avg=473.93, stdev=107.33 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 251], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 355], 00:10:58.529 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 469], 00:10:58.529 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[ 619], 00:10:58.529 | 99.00th=[ 742], 99.50th=[ 832], 99.90th=[ 873], 99.95th=[ 889], 00:10:58.529 | 99.99th=[ 889] 00:10:58.529 bw ( KiB/s): min= 4096, max= 4096, per=35.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.529 lat (usec) : 250=0.61%, 500=46.62%, 750=24.83%, 1000=27.81% 00:10:58.529 lat (msec) : 2=0.12% 00:10:58.529 cpu : usr=2.70%, sys=4.30%, ctx=1647, majf=0, minf=1 00:10:58.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 issued rwts: total=619,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.529 job1: (groupid=0, jobs=1): err= 0: pid=2873858: Tue Oct 1 17:10:56 2024 00:10:58.529 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:10:58.529 slat (nsec): min=27408, max=28432, avg=27866.16, stdev=343.58 00:10:58.529 clat (usec): min=1022, max=42079, avg=39222.57, stdev=9262.20 00:10:58.529 lat (usec): min=1050, max=42106, avg=39250.44, stdev=9262.20 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[40633], 20.00th=[41157], 00:10:58.529 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.529 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:58.529 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.529 | 99.99th=[42206] 00:10:58.529 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:58.529 slat (nsec): min=9112, max=74824, avg=33769.07, stdev=8185.67 00:10:58.529 clat (usec): min=145, max=1050, avg=532.30, stdev=160.58 00:10:58.529 lat (usec): min=180, max=1083, avg=566.07, stdev=162.35 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 241], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 371], 00:10:58.529 | 30.00th=[ 449], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 570], 00:10:58.529 | 70.00th=[ 611], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 799], 
00:10:58.529 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:58.529 | 99.99th=[ 1057] 00:10:58.529 bw ( KiB/s): min= 4096, max= 4096, per=35.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.529 lat (usec) : 250=1.51%, 500=39.17%, 750=46.52%, 1000=8.85% 00:10:58.529 lat (msec) : 2=0.56%, 50=3.39% 00:10:58.529 cpu : usr=1.25%, sys=1.92%, ctx=532, majf=0, minf=1 00:10:58.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.529 job2: (groupid=0, jobs=1): err= 0: pid=2873876: Tue Oct 1 17:10:56 2024 00:10:58.529 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:58.529 slat (nsec): min=9888, max=60742, avg=28829.12, stdev=3185.16 00:10:58.529 clat (usec): min=662, max=1485, avg=1079.53, stdev=121.48 00:10:58.529 lat (usec): min=691, max=1513, avg=1108.36, stdev=121.49 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 766], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 971], 00:10:58.529 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1139], 00:10:58.529 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1237], 00:10:58.529 | 99.00th=[ 1352], 99.50th=[ 1418], 99.90th=[ 1483], 99.95th=[ 1483], 00:10:58.529 | 99.99th=[ 1483] 00:10:58.529 write: IOPS=679, BW=2717KiB/s (2782kB/s)(2720KiB/1001msec); 0 zone resets 00:10:58.529 slat (nsec): min=9352, max=56700, avg=31719.50, stdev=9709.52 00:10:58.529 clat (usec): min=228, max=935, avg=589.70, stdev=130.88 00:10:58.529 lat (usec): min=239, max=970, avg=621.42, stdev=135.05 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 260], 5.00th=[ 363], 10.00th=[ 408], 20.00th=[ 482], 00:10:58.529 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:10:58.529 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:10:58.529 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:10:58.529 | 99.99th=[ 938] 00:10:58.529 bw ( KiB/s): min= 4096, max= 4096, per=35.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.529 lat (usec) : 250=0.25%, 500=13.59%, 750=37.33%, 1000=16.69% 00:10:58.529 lat (msec) : 2=32.13% 00:10:58.529 cpu : usr=2.40%, sys=5.00%, ctx=1195, majf=0, minf=2 00:10:58.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 issued rwts: total=512,680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.529 job3: (groupid=0, jobs=1): err= 0: pid=2873883: Tue Oct 1 17:10:56 2024 00:10:58.529 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:58.529 slat (nsec): min=8215, max=60940, avg=28891.38, stdev=3306.43 00:10:58.529 clat (usec): min=476, max=1216, avg=1015.66, stdev=96.14 00:10:58.529 lat (usec): min=505, max=1244, avg=1044.55, stdev=96.54 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 725], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 955], 
00:10:58.529 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1057], 00:10:58.529 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:10:58.529 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:58.529 | 99.99th=[ 1221] 00:10:58.529 write: IOPS=824, BW=3297KiB/s (3376kB/s)(3300KiB/1001msec); 0 zone resets 00:10:58.529 slat (nsec): min=9335, max=58965, avg=31000.12, stdev=10344.27 00:10:58.529 clat (usec): min=211, max=987, avg=520.08, stdev=126.87 00:10:58.529 lat (usec): min=241, max=1023, avg=551.08, stdev=131.22 00:10:58.529 clat percentiles (usec): 00:10:58.529 | 1.00th=[ 273], 5.00th=[ 326], 10.00th=[ 355], 20.00th=[ 416], 00:10:58.529 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 506], 60.00th=[ 545], 00:10:58.529 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 701], 95.00th=[ 742], 00:10:58.529 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 988], 99.95th=[ 988], 00:10:58.529 | 99.99th=[ 988] 00:10:58.529 bw ( KiB/s): min= 4096, max= 4096, per=35.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.529 lat (usec) : 250=0.30%, 500=29.09%, 750=30.37%, 1000=15.11% 00:10:58.529 lat (msec) : 2=25.13% 00:10:58.529 cpu : usr=2.90%, sys=4.90%, ctx=1338, majf=0, minf=1 00:10:58.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.529 issued rwts: total=512,825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.529 00:10:58.529 Run status group 0 (all jobs): 00:10:58.529 READ: bw=6392KiB/s (6546kB/s), 73.1KiB/s-2474KiB/s (74.8kB/s-2533kB/s), io=6648KiB (6808kB), run=1001-1040msec 00:10:58.530 WRITE: bw=11.4MiB/s (12.0MB/s), 1969KiB/s-4092KiB/s (2016kB/s-4190kB/s), io=11.9MiB (12.5MB), run=1001-1040msec 00:10:58.530 00:10:58.530 Disk stats (read/write): 00:10:58.530 nvme0n1: ios=558/868, merge=0/0, ticks=448/349, in_queue=797, util=87.07% 00:10:58.530 nvme0n2: ios=54/512, merge=0/0, ticks=622/200, in_queue=822, util=91.14% 00:10:58.530 nvme0n3: ios=520/512, merge=0/0, ticks=600/230, in_queue=830, util=92.74% 00:10:58.530 nvme0n4: ios=561/564, merge=0/0, ticks=563/234, in_queue=797, util=96.91% 00:10:58.530 17:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:58.530 [global] 00:10:58.530 thread=1 00:10:58.530 invalidate=1 00:10:58.530 rw=write 00:10:58.530 time_based=1 00:10:58.530 runtime=1 00:10:58.530 ioengine=libaio 00:10:58.530 direct=1 00:10:58.530 bs=4096 00:10:58.530 iodepth=128 00:10:58.530 norandommap=0 00:10:58.530 numjobs=1 00:10:58.530 00:10:58.530 verify_dump=1 00:10:58.530 verify_backlog=512 00:10:58.530 verify_state_save=0 00:10:58.530 do_verify=1 00:10:58.530 verify=crc32c-intel 00:10:58.530 [job0] 00:10:58.530 filename=/dev/nvme0n1 00:10:58.530 [job1] 00:10:58.530 filename=/dev/nvme0n2 00:10:58.530 [job2] 00:10:58.530 filename=/dev/nvme0n3 00:10:58.530 [job3] 00:10:58.530 filename=/dev/nvme0n4 00:10:58.530 Could not set queue depth (nvme0n1) 00:10:58.530 Could not set queue depth (nvme0n2) 00:10:58.530 Could not set queue depth (nvme0n3) 00:10:58.530 Could not set queue depth (nvme0n4) 00:10:58.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.819 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.819 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.819 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.819 fio-3.35 00:10:58.819 Starting 4 threads 00:11:00.207 00:11:00.207 job0: (groupid=0, jobs=1): err= 0: pid=2874323: Tue Oct 1 17:10:58 2024 00:11:00.207 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:11:00.207 slat (nsec): min=970, max=21143k, avg=172081.65, stdev=1189845.47 00:11:00.207 clat (usec): min=5052, max=83412, avg=17866.81, stdev=11720.21 00:11:00.207 lat (usec): min=5060, max=83422, avg=18038.89, stdev=11848.56 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 5669], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10159], 00:11:00.207 | 30.00th=[10683], 40.00th=[11600], 50.00th=[16581], 60.00th=[16712], 00:11:00.207 | 70.00th=[17957], 80.00th=[24511], 90.00th=[27395], 95.00th=[38011], 00:11:00.207 | 99.00th=[71828], 99.50th=[80217], 99.90th=[83362], 99.95th=[83362], 00:11:00.207 | 99.99th=[83362] 00:11:00.207 write: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1011msec); 0 zone resets 00:11:00.207 slat (nsec): min=1633, max=29917k, avg=138462.94, stdev=879843.80 00:11:00.207 clat (usec): min=1319, max=83374, avg=22436.58, stdev=14272.23 00:11:00.207 lat (usec): min=1330, max=83376, avg=22575.04, stdev=14323.93 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 4080], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[10159], 00:11:00.207 | 30.00th=[16450], 40.00th=[17433], 50.00th=[18744], 60.00th=[20841], 00:11:00.207 | 70.00th=[25035], 80.00th=[30278], 90.00th=[45351], 95.00th=[54789], 00:11:00.207 | 99.00th=[72877], 99.50th=[72877], 99.90th=[73925], 99.95th=[83362], 00:11:00.207 | 99.99th=[83362] 00:11:00.207 bw ( KiB/s): min=10120, max=14994, per=16.57%, avg=12557.00, stdev=3446.44, samples=2 00:11:00.207 iops : min= 2530, max= 3748, avg=3139.00, stdev=861.26, samples=2 00:11:00.207 lat (msec) : 2=0.03%, 4=0.19%, 10=17.39%, 20=49.57%, 50=27.83% 00:11:00.207 lat (msec) : 100=4.98% 00:11:00.207 cpu : usr=2.67%, sys=3.86%, ctx=329, majf=0, minf=1 00:11:00.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:00.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.207 issued rwts: total=3072,3270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.207 job1: (groupid=0, jobs=1): err= 0: pid=2874324: Tue Oct 1 17:10:58 2024 00:11:00.207 read: IOPS=3427, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1007msec) 00:11:00.207 slat (nsec): min=995, max=19202k, avg=139149.10, stdev=1143785.76 00:11:00.207 clat (usec): min=3250, max=50440, avg=16096.25, stdev=6659.10 00:11:00.207 lat (usec): min=4898, max=50449, avg=16235.40, stdev=6777.36 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 5866], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10421], 00:11:00.207 | 30.00th=[10814], 40.00th=[11731], 50.00th=[16450], 60.00th=[17171], 00:11:00.207 | 70.00th=[17957], 80.00th=[22152], 90.00th=[25035], 95.00th=[30802], 00:11:00.207 | 99.00th=[34866], 99.50th=[37487], 99.90th=[40633], 99.95th=[50594], 00:11:00.207 | 99.99th=[50594] 00:11:00.207 write: 
IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:00.207 slat (nsec): min=1698, max=23092k, avg=139432.59, stdev=840776.86 00:11:00.207 clat (usec): min=1202, max=84298, avg=20062.84, stdev=14098.44 00:11:00.207 lat (usec): min=1211, max=84307, avg=20202.28, stdev=14183.85 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 4228], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8455], 00:11:00.207 | 30.00th=[ 9241], 40.00th=[16712], 50.00th=[17957], 60.00th=[20055], 00:11:00.207 | 70.00th=[22152], 80.00th=[25035], 90.00th=[36439], 95.00th=[47973], 00:11:00.207 | 99.00th=[81265], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:11:00.207 | 99.99th=[84411] 00:11:00.207 bw ( KiB/s): min=11888, max=16784, per=18.92%, avg=14336.00, stdev=3461.99, samples=2 00:11:00.207 iops : min= 2972, max= 4196, avg=3584.00, stdev=865.50, samples=2 00:11:00.207 lat (msec) : 2=0.13%, 4=0.27%, 10=24.58%, 20=43.91%, 50=29.10% 00:11:00.207 lat (msec) : 100=2.02% 00:11:00.207 cpu : usr=3.28%, sys=3.78%, ctx=327, majf=0, minf=1 00:11:00.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:00.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.207 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.207 job2: (groupid=0, jobs=1): err= 0: pid=2874340: Tue Oct 1 17:10:58 2024 00:11:00.207 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:11:00.207 slat (nsec): min=1009, max=10970k, avg=74618.78, stdev=511382.54 00:11:00.207 clat (usec): min=2410, max=32789, avg=9588.98, stdev=4579.35 00:11:00.207 lat (usec): min=2412, max=32820, avg=9663.60, stdev=4623.81 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 3818], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 5997], 00:11:00.207 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8848], 00:11:00.207 | 70.00th=[10814], 80.00th=[13435], 90.00th=[15795], 95.00th=[18482], 00:11:00.207 | 99.00th=[26084], 99.50th=[27132], 99.90th=[28443], 99.95th=[28967], 00:11:00.207 | 99.99th=[32900] 00:11:00.207 write: IOPS=7162, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:11:00.207 slat (nsec): min=1695, max=5115.5k, avg=59287.48, stdev=278458.14 00:11:00.207 clat (usec): min=517, max=24822, avg=8041.89, stdev=3755.94 00:11:00.207 lat (usec): min=1351, max=24831, avg=8101.18, stdev=3776.88 00:11:00.207 clat percentiles (usec): 00:11:00.207 | 1.00th=[ 2507], 5.00th=[ 3687], 10.00th=[ 3949], 20.00th=[ 5211], 00:11:00.207 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 7111], 60.00th=[ 7767], 00:11:00.207 | 70.00th=[ 8455], 80.00th=[12387], 90.00th=[12649], 95.00th=[14615], 00:11:00.207 | 99.00th=[19792], 99.50th=[21365], 99.90th=[22152], 99.95th=[24773], 00:11:00.207 | 99.99th=[24773] 00:11:00.207 bw ( KiB/s): min=39600, max=39600, per=52.26%, avg=39600.00, stdev= 0.00, samples=1 00:11:00.207 iops : min= 9900, max= 9900, avg=9900.00, stdev= 0.00, samples=1 00:11:00.207 lat (usec) : 750=0.01% 00:11:00.207 lat (msec) : 2=0.25%, 4=5.71%, 10=64.48%, 20=27.28%, 50=2.27% 00:11:00.207 cpu : usr=4.60%, sys=7.59%, ctx=723, majf=0, minf=1 00:11:00.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:00.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:11:00.207 issued rwts: total=7168,7177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.208 job3: (groupid=0, jobs=1): err= 0: pid=2874347: Tue Oct 1 17:10:58 2024 00:11:00.208 read: IOPS=4971, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:11:00.208 slat (nsec): min=917, max=17295k, avg=86906.76, stdev=672335.47 00:11:00.208 clat (usec): min=1272, max=44229, avg=11331.22, stdev=6503.91 00:11:00.208 lat (usec): min=3004, max=44236, avg=11418.13, stdev=6568.84 00:11:00.208 clat percentiles (usec): 00:11:00.208 | 1.00th=[ 3982], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7570], 00:11:00.208 | 30.00th=[ 8094], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10290], 00:11:00.208 | 70.00th=[10552], 80.00th=[10945], 90.00th=[24511], 95.00th=[25035], 00:11:00.208 | 99.00th=[32900], 99.50th=[36963], 99.90th=[44303], 99.95th=[44303], 00:11:00.208 | 99.99th=[44303] 00:11:00.208 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:00.208 slat (nsec): min=1569, max=17251k, avg=102859.50, stdev=650439.10 00:11:00.208 clat (usec): min=1223, max=84774, avg=13801.80, stdev=11394.05 00:11:00.208 lat (usec): min=1236, max=84784, avg=13904.66, stdev=11472.03 00:11:00.208 clat percentiles (usec): 00:11:00.208 | 1.00th=[ 3884], 5.00th=[ 4752], 10.00th=[ 5997], 20.00th=[ 7635], 00:11:00.208 | 30.00th=[ 8586], 40.00th=[10421], 50.00th=[11863], 60.00th=[12518], 00:11:00.208 | 70.00th=[12780], 80.00th=[14615], 90.00th=[22938], 95.00th=[25297], 00:11:00.208 | 99.00th=[79168], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:11:00.208 | 99.99th=[84411] 00:11:00.208 bw ( KiB/s): min=16384, max=24576, per=27.03%, avg=20480.00, stdev=5792.62, samples=2 00:11:00.208 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:11:00.208 lat (msec) : 2=0.20%, 4=0.92%, 10=44.20%, 20=40.59%, 50=12.83% 00:11:00.208 lat (msec) : 100=1.26% 00:11:00.208 cpu : usr=3.09%, sys=5.89%, ctx=495, majf=0, minf=2 00:11:00.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:00.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.208 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.208 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.208 00:11:00.208 Run status group 0 (all jobs): 00:11:00.208 READ: bw=72.2MiB/s (75.7MB/s), 11.9MiB/s-27.9MiB/s (12.4MB/s-29.3MB/s), io=73.0MiB (76.5MB), run=1002-1011msec 00:11:00.208 WRITE: bw=74.0MiB/s (77.6MB/s), 12.6MiB/s-28.0MiB/s (13.2MB/s-29.3MB/s), io=74.8MiB (78.4MB), run=1002-1011msec 00:11:00.208 00:11:00.208 Disk stats (read/write): 00:11:00.208 nvme0n1: ios=2610/2847, merge=0/0, ticks=43933/58037, in_queue=101970, util=91.88% 00:11:00.208 nvme0n2: ios=2584/2847, merge=0/0, ticks=41414/61485, in_queue=102899, util=97.14% 00:11:00.208 nvme0n3: ios=6200/6631, merge=0/0, ticks=39548/35510, in_queue=75058, util=100.00% 00:11:00.208 nvme0n4: ios=3900/4096, merge=0/0, ticks=36998/47006, in_queue=84004, util=91.67% 00:11:00.208 17:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:00.208 [global] 00:11:00.208 thread=1 00:11:00.208 invalidate=1 00:11:00.208 rw=randwrite 00:11:00.208 time_based=1 00:11:00.208 runtime=1 00:11:00.208 ioengine=libaio 00:11:00.208 
direct=1 00:11:00.208 bs=4096 00:11:00.208 iodepth=128 00:11:00.208 norandommap=0 00:11:00.208 numjobs=1 00:11:00.208 00:11:00.208 verify_dump=1 00:11:00.208 verify_backlog=512 00:11:00.208 verify_state_save=0 00:11:00.208 do_verify=1 00:11:00.208 verify=crc32c-intel 00:11:00.208 [job0] 00:11:00.208 filename=/dev/nvme0n1 00:11:00.208 [job1] 00:11:00.208 filename=/dev/nvme0n2 00:11:00.208 [job2] 00:11:00.208 filename=/dev/nvme0n3 00:11:00.208 [job3] 00:11:00.208 filename=/dev/nvme0n4 00:11:00.208 Could not set queue depth (nvme0n1) 00:11:00.208 Could not set queue depth (nvme0n2) 00:11:00.208 Could not set queue depth (nvme0n3) 00:11:00.208 Could not set queue depth (nvme0n4) 00:11:00.468 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.468 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.468 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.468 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.468 fio-3.35 00:11:00.468 Starting 4 threads 00:11:01.878 00:11:01.878 job0: (groupid=0, jobs=1): err= 0: pid=2874832: Tue Oct 1 17:11:00 2024 00:11:01.878 read: IOPS=9055, BW=35.4MiB/s (37.1MB/s)(36.0MiB/1017msec) 00:11:01.878 slat (nsec): min=888, max=11273k, avg=48845.99, stdev=390385.17 00:11:01.878 clat (usec): min=1438, max=46562, avg=7325.00, stdev=3099.26 00:11:01.878 lat (usec): min=1443, max=57835, avg=7373.85, stdev=3122.64 00:11:01.878 clat percentiles (usec): 00:11:01.878 | 1.00th=[ 2606], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 5800], 00:11:01.878 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:11:01.878 | 70.00th=[ 7635], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[11076], 00:11:01.878 | 99.00th=[21890], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:11:01.878 | 99.99th=[46400] 00:11:01.878 write: IOPS=9061, BW=35.4MiB/s (37.1MB/s)(36.0MiB/1017msec); 0 zone resets 00:11:01.878 slat (nsec): min=1488, max=17115k, avg=47705.68, stdev=430076.70 00:11:01.878 clat (usec): min=632, max=27214, avg=6677.63, stdev=2955.66 00:11:01.878 lat (usec): min=869, max=27238, avg=6725.34, stdev=2977.18 00:11:01.878 clat percentiles (usec): 00:11:01.878 | 1.00th=[ 1237], 5.00th=[ 2933], 10.00th=[ 4015], 20.00th=[ 5145], 00:11:01.878 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:11:01.878 | 70.00th=[ 7046], 80.00th=[ 7439], 90.00th=[ 8979], 95.00th=[10814], 00:11:01.878 | 99.00th=[21365], 99.50th=[23200], 99.90th=[24511], 99.95th=[24511], 00:11:01.879 | 99.99th=[27132] 00:11:01.879 bw ( KiB/s): min=34704, max=39024, per=38.83%, avg=36864.00, stdev=3054.70, samples=2 00:11:01.879 iops : min= 8676, max= 9756, avg=9216.00, stdev=763.68, samples=2 00:11:01.879 lat (usec) : 750=0.01% 00:11:01.879 lat (msec) : 2=1.60%, 4=6.57%, 10=84.21%, 20=5.91%, 50=1.70% 00:11:01.879 cpu : usr=7.38%, sys=9.06%, ctx=489, majf=0, minf=1 00:11:01.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:01.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.879 issued rwts: total=9209,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.879 job1: (groupid=0, jobs=1): err= 0: pid=2874838: 
Tue Oct 1 17:11:00 2024 00:11:01.879 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:01.879 slat (nsec): min=906, max=28677k, avg=102289.94, stdev=886331.80 00:11:01.879 clat (usec): min=1820, max=127761, avg=12814.66, stdev=11428.41 00:11:01.879 lat (usec): min=1827, max=127897, avg=12916.95, stdev=11514.35 00:11:01.879 clat percentiles (msec): 00:11:01.879 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:11:01.879 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:11:01.879 | 70.00th=[ 12], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 24], 00:11:01.879 | 99.00th=[ 47], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 123], 00:11:01.879 | 99.99th=[ 128] 00:11:01.879 write: IOPS=4745, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1002msec); 0 zone resets 00:11:01.879 slat (nsec): min=1500, max=16413k, avg=97380.87, stdev=723275.13 00:11:01.879 clat (usec): min=592, max=129269, avg=14331.35, stdev=20030.22 00:11:01.879 lat (usec): min=670, max=129275, avg=14428.73, stdev=20130.29 00:11:01.879 clat percentiles (usec): 00:11:01.879 | 1.00th=[ 1139], 5.00th=[ 3982], 10.00th=[ 5932], 20.00th=[ 7177], 00:11:01.879 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:11:01.879 | 70.00th=[ 10290], 80.00th=[ 12256], 90.00th=[ 19268], 95.00th=[ 80217], 00:11:01.879 | 99.00th=[108528], 99.50th=[116917], 99.90th=[129500], 99.95th=[129500], 00:11:01.879 | 99.99th=[129500] 00:11:01.879 bw ( KiB/s): min=15568, max=21448, per=19.50%, avg=18508.00, stdev=4157.79, samples=2 00:11:01.879 iops : min= 3892, max= 5362, avg=4627.00, stdev=1039.45, samples=2 00:11:01.879 lat (usec) : 750=0.04%, 1000=0.26% 00:11:01.879 lat (msec) : 2=1.93%, 4=0.76%, 10=61.21%, 20=24.06%, 50=8.34% 00:11:01.879 lat (msec) : 100=2.33%, 250=1.07% 00:11:01.879 cpu : usr=3.50%, sys=4.90%, ctx=430, majf=0, minf=3 00:11:01.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:01.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.879 issued rwts: total=4608,4755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.879 job2: (groupid=0, jobs=1): err= 0: pid=2874848: Tue Oct 1 17:11:00 2024 00:11:01.879 read: IOPS=5322, BW=20.8MiB/s (21.8MB/s)(21.7MiB/1046msec) 00:11:01.879 slat (nsec): min=921, max=19164k, avg=85776.12, stdev=632826.70 00:11:01.879 clat (usec): min=3364, max=52591, avg=11417.21, stdev=7451.61 00:11:01.879 lat (usec): min=3367, max=59569, avg=11502.99, stdev=7488.57 00:11:01.879 clat percentiles (usec): 00:11:01.879 | 1.00th=[ 3621], 5.00th=[ 4146], 10.00th=[ 6652], 20.00th=[ 8160], 00:11:01.879 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10945], 00:11:01.879 | 70.00th=[12125], 80.00th=[12518], 90.00th=[15401], 95.00th=[25035], 00:11:01.879 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:11:01.879 | 99.99th=[52691] 00:11:01.879 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(24.0MiB/1046msec); 0 zone resets 00:11:01.879 slat (nsec): min=1564, max=10959k, avg=78648.27, stdev=518953.44 00:11:01.879 clat (usec): min=796, max=123040, avg=11194.07, stdev=12196.67 00:11:01.879 lat (usec): min=804, max=125617, avg=11272.72, stdev=12273.60 00:11:01.879 clat percentiles (msec): 00:11:01.879 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:11:01.879 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:11:01.879 | 70.00th=[ 11], 80.00th=[ 
12], 90.00th=[ 14], 95.00th=[ 20], 00:11:01.879 | 99.00th=[ 92], 99.50th=[ 100], 99.90th=[ 124], 99.95th=[ 124], 00:11:01.879 | 99.99th=[ 124] 00:11:01.879 bw ( KiB/s): min=22664, max=26488, per=25.89%, avg=24576.00, stdev=2703.98, samples=2 00:11:01.879 iops : min= 5666, max= 6622, avg=6144.00, stdev=675.99, samples=2 00:11:01.879 lat (usec) : 1000=0.03% 00:11:01.879 lat (msec) : 2=0.17%, 4=3.89%, 10=55.87%, 20=34.61%, 50=3.49% 00:11:01.879 lat (msec) : 100=1.69%, 250=0.25% 00:11:01.879 cpu : usr=3.25%, sys=5.74%, ctx=569, majf=0, minf=1 00:11:01.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:01.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.879 issued rwts: total=5567,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.879 job3: (groupid=0, jobs=1): err= 0: pid=2874855: Tue Oct 1 17:11:00 2024 00:11:01.879 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:01.879 slat (nsec): min=933, max=14306k, avg=111666.62, stdev=877490.60 00:11:01.879 clat (usec): min=3471, max=36086, avg=14870.50, stdev=5102.01 00:11:01.879 lat (usec): min=3486, max=36111, avg=14982.17, stdev=5170.20 00:11:01.879 clat percentiles (usec): 00:11:01.879 | 1.00th=[ 3654], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[11207], 00:11:01.879 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[15139], 00:11:01.879 | 70.00th=[16319], 80.00th=[19268], 90.00th=[21890], 95.00th=[23987], 00:11:01.879 | 99.00th=[30802], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:11:01.879 | 99.99th=[35914] 00:11:01.879 write: IOPS=4682, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1006msec); 0 zone resets 00:11:01.879 slat (nsec): min=1527, max=14348k, avg=86505.58, stdev=623192.40 00:11:01.879 clat (usec): min=993, max=37951, avg=12555.94, stdev=5797.39 00:11:01.879 lat (usec): min=1001, max=37955, avg=12642.44, stdev=5843.14 00:11:01.879 clat percentiles (usec): 00:11:01.879 | 1.00th=[ 1876], 5.00th=[ 5866], 10.00th=[ 6783], 20.00th=[ 8979], 00:11:01.879 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:11:01.879 | 70.00th=[13042], 80.00th=[17957], 90.00th=[21365], 95.00th=[24511], 00:11:01.879 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:11:01.879 | 99.99th=[38011] 00:11:01.879 bw ( KiB/s): min=17752, max=19112, per=19.42%, avg=18432.00, stdev=961.67, samples=2 00:11:01.879 iops : min= 4438, max= 4778, avg=4608.00, stdev=240.42, samples=2 00:11:01.879 lat (usec) : 1000=0.03% 00:11:01.879 lat (msec) : 2=0.53%, 4=1.35%, 10=23.85%, 20=60.22%, 50=14.01% 00:11:01.879 cpu : usr=3.88%, sys=4.98%, ctx=350, majf=0, minf=2 00:11:01.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:01.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.879 issued rwts: total=4608,4711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.879 00:11:01.879 Run status group 0 (all jobs): 00:11:01.879 READ: bw=89.6MiB/s (93.9MB/s), 17.9MiB/s-35.4MiB/s (18.8MB/s-37.1MB/s), io=93.7MiB (98.3MB), run=1002-1046msec 00:11:01.879 WRITE: bw=92.7MiB/s (97.2MB/s), 18.3MiB/s-35.4MiB/s (19.2MB/s-37.1MB/s), io=97.0MiB (102MB), run=1002-1046msec 00:11:01.879 00:11:01.879 Disk stats 
(read/write): 00:11:01.879 nvme0n1: ios=7730/8012, merge=0/0, ticks=44784/41349, in_queue=86133, util=87.58% 00:11:01.879 nvme0n2: ios=3438/3584, merge=0/0, ticks=34977/48180, in_queue=83157, util=88.06% 00:11:01.879 nvme0n3: ios=4639/5287, merge=0/0, ticks=23993/29903, in_queue=53896, util=87.32% 00:11:01.879 nvme0n4: ios=3622/4096, merge=0/0, ticks=45816/41422, in_queue=87238, util=89.19% 00:11:01.879 17:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:01.879 17:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2875161 00:11:01.879 17:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:01.879 17:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:01.879 [global] 00:11:01.879 thread=1 00:11:01.879 invalidate=1 00:11:01.879 rw=read 00:11:01.879 time_based=1 00:11:01.879 runtime=10 00:11:01.879 ioengine=libaio 00:11:01.879 direct=1 00:11:01.879 bs=4096 00:11:01.879 iodepth=1 00:11:01.879 norandommap=1 00:11:01.879 numjobs=1 00:11:01.879 00:11:01.879 [job0] 00:11:01.879 filename=/dev/nvme0n1 00:11:01.879 [job1] 00:11:01.879 filename=/dev/nvme0n2 00:11:01.879 [job2] 00:11:01.879 filename=/dev/nvme0n3 00:11:01.879 [job3] 00:11:01.879 filename=/dev/nvme0n4 00:11:01.879 Could not set queue depth (nvme0n1) 00:11:01.879 Could not set queue depth (nvme0n2) 00:11:01.879 Could not set queue depth (nvme0n3) 00:11:01.879 Could not set queue depth (nvme0n4) 00:11:02.144 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.144 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.144 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.144 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.144 fio-3.35 00:11:02.144 Starting 4 threads 00:11:04.690 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:04.951 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8077312, buflen=4096 00:11:04.951 fio: pid=2875401, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.951 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:04.951 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12001280, buflen=4096 00:11:04.951 fio: pid=2875392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.951 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.951 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:05.211 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10305536, buflen=4096 00:11:05.211 fio: pid=2875371, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.211 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.211 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:05.472 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=348160, buflen=4096 00:11:05.472 fio: pid=2875374, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:05.472 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.472 17:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:05.472 00:11:05.472 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2875371: Tue Oct 1 17:11:03 2024 00:11:05.472 read: IOPS=850, BW=3400KiB/s (3482kB/s)(9.83MiB/2960msec) 00:11:05.472 slat (usec): min=6, max=26240, avg=41.84, stdev=601.74 00:11:05.472 clat (usec): min=496, max=41093, avg=1119.49, stdev=804.75 00:11:05.472 lat (usec): min=504, max=41118, avg=1161.34, stdev=1004.05 00:11:05.472 clat percentiles (usec): 00:11:05.472 | 1.00th=[ 701], 5.00th=[ 898], 10.00th=[ 971], 20.00th=[ 1045], 00:11:05.472 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:11:05.472 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1237], 00:11:05.472 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1369], 00:11:05.472 | 99.99th=[41157] 00:11:05.472 bw ( KiB/s): min= 3416, max= 3520, per=36.30%, avg=3462.40, stdev=39.76, samples=5 00:11:05.472 iops : min= 854, max= 880, avg=865.60, stdev= 9.94, samples=5 00:11:05.472 lat (usec) : 500=0.04%, 750=1.59%, 1000=11.52% 00:11:05.473 lat (msec) : 2=86.77%, 50=0.04% 00:11:05.473 cpu : usr=0.74%, sys=2.67%, ctx=2520, majf=0, minf=1 00:11:05.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.473 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2875374: Tue Oct 1 17:11:03 2024 00:11:05.473 read: IOPS=27, BW=108KiB/s (111kB/s)(340KiB/3147msec) 00:11:05.473 slat (usec): min=7, max=19609, avg=568.47, stdev=2856.87 00:11:05.473 clat (usec): min=645, max=44917, avg=36434.97, stdev=13799.43 00:11:05.473 lat (usec): min=671, max=61012, avg=36930.93, stdev=14247.26 00:11:05.473 clat percentiles (usec): 00:11:05.473 | 1.00th=[ 644], 5.00th=[ 889], 10.00th=[ 1004], 20.00th=[41157], 00:11:05.473 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:05.473 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:11:05.473 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:05.473 | 99.99th=[44827] 00:11:05.473 bw ( KiB/s): min= 88, max= 160, per=1.13%, avg=108.17, stdev=26.51, samples=6 00:11:05.473 iops : min= 22, max= 40, avg=27.00, stdev= 6.66, samples=6 00:11:05.473 lat (usec) : 750=2.33%, 1000=5.81% 00:11:05.473 lat (msec) : 2=4.65%, 50=86.05% 00:11:05.473 cpu : usr=0.00%, sys=0.35%, ctx=89, majf=0, minf=2 00:11:05.473 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.473 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2875392: Tue Oct 1 17:11:03 2024 00:11:05.473 read: IOPS=1053, BW=4213KiB/s (4314kB/s)(11.4MiB/2782msec) 00:11:05.473 slat (usec): min=6, max=22947, avg=35.65, stdev=471.70 00:11:05.473 clat (usec): min=168, max=41458, avg=900.96, stdev=2285.71 00:11:05.473 lat (usec): min=176, max=41486, avg=936.60, stdev=2332.82 00:11:05.473 clat percentiles (usec): 00:11:05.473 | 1.00th=[ 433], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 652], 00:11:05.473 | 30.00th=[ 701], 40.00th=[ 742], 50.00th=[ 775], 60.00th=[ 816], 00:11:05.473 | 70.00th=[ 857], 80.00th=[ 898], 90.00th=[ 947], 95.00th=[ 979], 00:11:05.473 | 99.00th=[ 1029], 99.50th=[ 1074], 99.90th=[41157], 99.95th=[41157], 00:11:05.473 | 99.99th=[41681] 00:11:05.473 bw ( KiB/s): min= 1096, max= 5064, per=44.24%, avg=4219.20, stdev=1747.38, samples=5 00:11:05.473 iops : min= 274, max= 1266, avg=1054.80, stdev=436.84, samples=5 00:11:05.473 lat (usec) : 250=0.10%, 500=3.79%, 750=39.54%, 1000=54.01% 00:11:05.473 lat (msec) : 2=2.15%, 4=0.03%, 50=0.34% 00:11:05.473 cpu : usr=1.08%, sys=2.88%, ctx=2935, majf=0, minf=2 00:11:05.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 issued rwts: total=2931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.473 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2875401: Tue Oct 1 17:11:03 2024 00:11:05.473 read: IOPS=758, BW=3033KiB/s (3105kB/s)(7888KiB/2601msec) 00:11:05.473 slat (nsec): min=7135, max=48699, avg=27522.95, stdev=2789.41 00:11:05.473 clat (usec): min=657, max=42063, avg=1271.75, stdev=3288.41 00:11:05.473 lat (usec): min=685, max=42090, avg=1299.27, stdev=3288.27 00:11:05.473 clat percentiles (usec): 00:11:05.473 | 1.00th=[ 791], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 955], 00:11:05.473 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1020], 00:11:05.473 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:11:05.473 | 99.00th=[ 1205], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:05.473 | 99.99th=[42206] 00:11:05.473 bw ( KiB/s): min= 112, max= 3888, per=31.91%, avg=3043.20, stdev=1647.50, samples=5 00:11:05.473 iops : min= 28, max= 972, avg=760.80, stdev=411.88, samples=5 00:11:05.473 lat (usec) : 750=0.51%, 1000=42.37% 00:11:05.473 lat (msec) : 2=56.36%, 4=0.05%, 50=0.66% 00:11:05.473 cpu : usr=1.58%, sys=2.92%, ctx=1973, majf=0, minf=2 00:11:05.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.473 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.473 latency : target=0, window=0, percentile=100.00%, depth=1 
00:11:05.473 00:11:05.473 Run status group 0 (all jobs): 00:11:05.473 READ: bw=9537KiB/s (9766kB/s), 108KiB/s-4213KiB/s (111kB/s-4314kB/s), io=29.3MiB (30.7MB), run=2601-3147msec 00:11:05.473 00:11:05.473 Disk stats (read/write): 00:11:05.473 nvme0n1: ios=2424/0, merge=0/0, ticks=2625/0, in_queue=2625, util=93.36% 00:11:05.473 nvme0n2: ios=83/0, merge=0/0, ticks=3016/0, in_queue=3016, util=94.48% 00:11:05.473 nvme0n3: ios=2775/0, merge=0/0, ticks=3145/0, in_queue=3145, util=100.00% 00:11:05.473 nvme0n4: ios=1973/0, merge=0/0, ticks=2374/0, in_queue=2374, util=96.38% 00:11:05.734 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.734 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:05.734 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.734 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:05.994 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.994 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2875161 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:06.255 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug 
test: fio failed as expected' 00:11:06.517 nvmf hotplug test: fio failed as expected 00:11:06.517 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.517 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.778 rmmod nvme_tcp 00:11:06.778 rmmod nvme_fabrics 00:11:06.778 rmmod nvme_keyring 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2871640 ']' 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2871640 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2871640 ']' 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2871640 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2871640 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2871640' 00:11:06.778 killing process with pid 2871640 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2871640 00:11:06.778 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2871640 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:07.040 17:11:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.040 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.956 00:11:08.956 real 0m29.078s 00:11:08.956 user 2m31.015s 00:11:08.956 sys 0m9.455s 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.956 ************************************ 00:11:08.956 END TEST nvmf_fio_target 00:11:08.956 ************************************ 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.956 17:11:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 ************************************ 00:11:09.218 START TEST nvmf_bdevio 00:11:09.218 ************************************ 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.218 * Looking for test storage... 
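The hotplug phase traced above reduces to a small loop: each backing Malloc bdev is deleted over the RPC socket while fio still has jobs open on the exported namespaces, and the test then expects fio to exit with an error (hence the "nvmf hotplug test: fio failed as expected" message). A condensed sketch using only commands that appear in the trace; the fio PID/status variable names are illustrative, not the harness's own:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for malloc_bdev in Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      ./scripts/rpc.py bdev_malloc_delete "$malloc_bdev"    # pull the bdev out from under fio
  done
  wait "$fio_pid" || fio_status=$?                          # fio is expected to fail here (illustrative names)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # drop the initiator connection
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1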
00:11:09.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.218 --rc genhtml_branch_coverage=1 00:11:09.218 --rc genhtml_function_coverage=1 00:11:09.218 --rc genhtml_legend=1 00:11:09.218 --rc geninfo_all_blocks=1 00:11:09.218 --rc geninfo_unexecuted_blocks=1 00:11:09.218 00:11:09.218 ' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.218 --rc genhtml_branch_coverage=1 00:11:09.218 --rc genhtml_function_coverage=1 00:11:09.218 --rc genhtml_legend=1 00:11:09.218 --rc geninfo_all_blocks=1 00:11:09.218 --rc geninfo_unexecuted_blocks=1 00:11:09.218 00:11:09.218 ' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.218 --rc genhtml_branch_coverage=1 00:11:09.218 --rc genhtml_function_coverage=1 00:11:09.218 --rc genhtml_legend=1 00:11:09.218 --rc geninfo_all_blocks=1 00:11:09.218 --rc geninfo_unexecuted_blocks=1 00:11:09.218 00:11:09.218 ' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:09.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.218 --rc genhtml_branch_coverage=1 00:11:09.218 --rc genhtml_function_coverage=1 00:11:09.218 --rc genhtml_legend=1 00:11:09.218 --rc geninfo_all_blocks=1 00:11:09.218 --rc geninfo_unexecuted_blocks=1 00:11:09.218 00:11:09.218 ' 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.218 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.219 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.363 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:17.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:17.364 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.364 17:11:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:17.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:17.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.364 
17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.364 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:11:17.364 00:11:17.364 --- 10.0.0.2 ping statistics --- 00:11:17.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.364 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:11:17.364 00:11:17.364 --- 10.0.0.1 ping statistics --- 00:11:17.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.364 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2880700 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2880700 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2880700 ']' 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.364 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.365 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.365 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.365 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.365 [2024-10-01 17:11:15.187198] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
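The namespace setup walked through above (nvmf_tcp_init in nvmf/common.sh) gives the target and the initiator separate network stacks on one host: the first port, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A minimal sketch using the same interface names and addresses recorded in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78), which is why its listener address in the RPCs that follow is 10.0.0.2.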
00:11:17.365 [2024-10-01 17:11:15.187270] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.365 [2024-10-01 17:11:15.275130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.365 [2024-10-01 17:11:15.322813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.365 [2024-10-01 17:11:15.322865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.365 [2024-10-01 17:11:15.322874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.365 [2024-10-01 17:11:15.322881] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.365 [2024-10-01 17:11:15.322887] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.365 [2024-10-01 17:11:15.323070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:17.365 [2024-10-01 17:11:15.323199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:17.365 [2024-10-01 17:11:15.323354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.365 [2024-10-01 17:11:15.323356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 [2024-10-01 17:11:16.057752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 Malloc0 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.626 17:11:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 [2024-10-01 17:11:16.111204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:17.626 { 00:11:17.626 "params": { 00:11:17.626 "name": "Nvme$subsystem", 00:11:17.626 "trtype": "$TEST_TRANSPORT", 00:11:17.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.626 "adrfam": "ipv4", 00:11:17.626 "trsvcid": "$NVMF_PORT", 00:11:17.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.626 "hdgst": ${hdgst:-false}, 00:11:17.626 "ddgst": ${ddgst:-false} 00:11:17.626 }, 00:11:17.626 "method": "bdev_nvme_attach_controller" 00:11:17.626 } 00:11:17.626 EOF 00:11:17.626 )") 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:17.626 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:17.626 "params": { 00:11:17.626 "name": "Nvme1", 00:11:17.626 "trtype": "tcp", 00:11:17.626 "traddr": "10.0.0.2", 00:11:17.626 "adrfam": "ipv4", 00:11:17.626 "trsvcid": "4420", 00:11:17.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.626 "hdgst": false, 00:11:17.626 "ddgst": false 00:11:17.626 }, 00:11:17.626 "method": "bdev_nvme_attach_controller" 00:11:17.626 }' 00:11:17.887 [2024-10-01 17:11:16.174720] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:11:17.887 [2024-10-01 17:11:16.174788] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880752 ] 00:11:17.887 [2024-10-01 17:11:16.243293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.887 [2024-10-01 17:11:16.284118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.887 [2024-10-01 17:11:16.284242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.887 [2024-10-01 17:11:16.284245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.147 I/O targets: 00:11:18.147 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:18.147 00:11:18.147 00:11:18.147 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.147 http://cunit.sourceforge.net/ 00:11:18.147 00:11:18.147 00:11:18.147 Suite: bdevio tests on: Nvme1n1 00:11:18.147 Test: blockdev write read block ...passed 00:11:18.147 Test: blockdev write zeroes read block ...passed 00:11:18.147 Test: blockdev write zeroes read no split ...passed 00:11:18.147 Test: blockdev write zeroes read split ...passed 00:11:18.147 Test: blockdev write zeroes read split partial ...passed 00:11:18.147 Test: blockdev reset ...[2024-10-01 17:11:16.595645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:18.147 [2024-10-01 17:11:16.595717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad4c50 (9): Bad file descriptor 00:11:18.147 [2024-10-01 17:11:16.613013] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:18.147 passed 00:11:18.147 Test: blockdev write read 8 blocks ...passed 00:11:18.147 Test: blockdev write read size > 128k ...passed 00:11:18.147 Test: blockdev write read invalid size ...passed 00:11:18.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.408 Test: blockdev write read max offset ...passed 00:11:18.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.408 Test: blockdev writev readv 8 blocks ...passed 00:11:18.408 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.408 Test: blockdev writev readv block ...passed 00:11:18.408 Test: blockdev writev readv size > 128k ...passed 00:11:18.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.408 Test: blockdev comparev and writev ...[2024-10-01 17:11:16.919100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.919141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.919147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.919645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.919656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.919666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.919672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.920144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.920152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.920162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.920167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.920652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.920661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:18.408 [2024-10-01 17:11:16.920671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.408 [2024-10-01 17:11:16.920677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:18.668 passed 00:11:18.668 Test: blockdev nvme passthru rw ...passed 00:11:18.668 Test: blockdev nvme passthru vendor specific ...[2024-10-01 17:11:17.004803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.668 [2024-10-01 17:11:17.004816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:18.668 [2024-10-01 17:11:17.005145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.668 [2024-10-01 17:11:17.005156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:18.668 [2024-10-01 17:11:17.005485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.668 [2024-10-01 17:11:17.005492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:18.668 [2024-10-01 17:11:17.005821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.668 [2024-10-01 17:11:17.005829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:18.668 passed 00:11:18.668 Test: blockdev nvme admin passthru ...passed 00:11:18.668 Test: blockdev copy ...passed 00:11:18.668 00:11:18.668 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.668 suites 1 1 n/a 0 0 00:11:18.668 tests 23 23 23 0 0 00:11:18.668 asserts 152 152 152 0 n/a 00:11:18.668 00:11:18.668 Elapsed time = 1.198 seconds 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.668 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.668 rmmod nvme_tcp 00:11:18.668 rmmod nvme_fabrics 00:11:18.668 rmmod nvme_keyring 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
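The bdevio pass that just completed is driven by the RPCs recorded earlier in the trace: a TCP transport and a 64 MiB, 512-byte-block Malloc bdev are created, exposed through subsystem nqn.2016-06.io.spdk:cnode1, and a listener is opened on 10.0.0.2:4420; bdevio then attaches as an initiator using the generated bdev_nvme_attach_controller JSON. A condensed sketch of that target-side sequence, with rpc.py standing in for the harness's rpc_cmd wrapper and talking to the nvmf_tgt started above:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The harness then runs test/bdev/bdevio/bdevio --json /dev/fd/62, feeding it the
  # bdev_nvme_attach_controller parameters printed above (trtype tcp, traddr 10.0.0.2,
  # trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), and executes its CUnit suite
  # against the attached Nvme1n1 namespace.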
00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2880700 ']' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2880700 ']' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2880700' 00:11:18.930 killing process with pid 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2880700 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.930 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.474 00:11:21.474 real 0m11.987s 00:11:21.474 user 0m12.583s 00:11:21.474 sys 0m6.118s 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.474 ************************************ 00:11:21.474 END TEST nvmf_bdevio 00:11:21.474 ************************************ 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:21.474 00:11:21.474 real 4m57.104s 00:11:21.474 user 11m38.101s 00:11:21.474 sys 1m47.896s 
00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.474 ************************************ 00:11:21.474 END TEST nvmf_target_core 00:11:21.474 ************************************ 00:11:21.474 17:11:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:21.474 17:11:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.474 17:11:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.474 17:11:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.474 ************************************ 00:11:21.474 START TEST nvmf_target_extra 00:11:21.474 ************************************ 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:21.474 * Looking for test storage... 00:11:21.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.474 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.475 --rc genhtml_branch_coverage=1 00:11:21.475 --rc genhtml_function_coverage=1 00:11:21.475 --rc genhtml_legend=1 00:11:21.475 --rc geninfo_all_blocks=1 00:11:21.475 --rc geninfo_unexecuted_blocks=1 00:11:21.475 00:11:21.475 ' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.475 --rc genhtml_branch_coverage=1 00:11:21.475 --rc genhtml_function_coverage=1 00:11:21.475 --rc genhtml_legend=1 00:11:21.475 --rc geninfo_all_blocks=1 00:11:21.475 --rc geninfo_unexecuted_blocks=1 00:11:21.475 00:11:21.475 ' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.475 --rc genhtml_branch_coverage=1 00:11:21.475 --rc genhtml_function_coverage=1 00:11:21.475 --rc genhtml_legend=1 00:11:21.475 --rc geninfo_all_blocks=1 00:11:21.475 --rc geninfo_unexecuted_blocks=1 00:11:21.475 00:11:21.475 ' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.475 --rc genhtml_branch_coverage=1 00:11:21.475 --rc genhtml_function_coverage=1 00:11:21.475 --rc genhtml_legend=1 00:11:21.475 --rc geninfo_all_blocks=1 00:11:21.475 --rc geninfo_unexecuted_blocks=1 00:11:21.475 00:11:21.475 ' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
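Note: the lcov probe traced above drives a dotted-version comparison (cmp_versions in scripts/common.sh) to decide whether the older lcov option set is needed. The sketch below is a simplified stand-in for that check, not the script itself.

  version_lt() {                                   # version_lt 1.15 2  -> true when $1 < $2
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local i
      for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1                                     # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x, use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"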
00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.475 17:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.475 ************************************ 00:11:21.475 START TEST nvmf_example 00:11:21.475 ************************************ 00:11:21.476 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:21.476 * Looking for test storage... 
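Note: the "[: : integer expression expected" message printed above is bash complaining that a numeric test received an empty string (the trace shows '[' '' -eq 1 ']' at nvmf/common.sh line 33). The variable involved is not visible in this log, so the name below is a stand-in; the sketch only demonstrates the failure mode and the usual default-value guard.

  SOME_FLAG=""                                      # stand-in for the unset/empty variable at common.sh:33
  [ "$SOME_FLAG" -eq 1 ] && echo enabled            # prints "[: : integer expression expected" and evaluates false
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled       # defaulting to 0 keeps the numeric test well-formed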
00:11:21.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.476 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.476 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.476 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.738 --rc genhtml_branch_coverage=1 00:11:21.738 --rc genhtml_function_coverage=1 00:11:21.738 --rc genhtml_legend=1 00:11:21.738 --rc geninfo_all_blocks=1 00:11:21.738 --rc geninfo_unexecuted_blocks=1 00:11:21.738 00:11:21.738 ' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.738 --rc genhtml_branch_coverage=1 00:11:21.738 --rc genhtml_function_coverage=1 00:11:21.738 --rc genhtml_legend=1 00:11:21.738 --rc geninfo_all_blocks=1 00:11:21.738 --rc geninfo_unexecuted_blocks=1 00:11:21.738 00:11:21.738 ' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.738 --rc genhtml_branch_coverage=1 00:11:21.738 --rc genhtml_function_coverage=1 00:11:21.738 --rc genhtml_legend=1 00:11:21.738 --rc geninfo_all_blocks=1 00:11:21.738 --rc geninfo_unexecuted_blocks=1 00:11:21.738 00:11:21.738 ' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.738 --rc genhtml_branch_coverage=1 00:11:21.738 --rc genhtml_function_coverage=1 00:11:21.738 --rc genhtml_legend=1 00:11:21.738 --rc geninfo_all_blocks=1 00:11:21.738 --rc geninfo_unexecuted_blocks=1 00:11:21.738 00:11:21.738 ' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:21.738 17:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.738 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:21.739 17:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.739 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:29.885 17:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.885 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:29.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:29.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:29.886 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:29.886 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.886 17:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
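Note: the nvmftestinit trace above builds the TCP test topology by moving one port of the NIC into a network namespace (the target side) and leaving its sibling port in the root namespace (the initiator side). A condensed sketch of that same sequence, using the interface names that appear in the log, is below; it must run as root.

  ip netns add cvl_0_0_ns_spdk                       # namespace that owns the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # root namespace -> namespaced target address, as in the log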
00:11:29.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:11:29.886 00:11:29.886 --- 10.0.0.2 ping statistics --- 00:11:29.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.886 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:11:29.886 00:11:29.886 --- 10.0.0.1 ping statistics --- 00:11:29.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.886 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2885461 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2885461 00:11:29.886 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2885461 ']' 00:11:29.887 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.887 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.887 17:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.887 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.887 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.887 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.887 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:29.887 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:29.887 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.887 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:30.148 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.331 Initializing NVMe Controllers 00:11:40.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.331 Initialization complete. Launching workers. 00:11:40.331 ======================================================== 00:11:40.331 Latency(us) 00:11:40.331 Device Information : IOPS MiB/s Average min max 00:11:40.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19005.84 74.24 3367.08 634.08 16175.96 00:11:40.331 ======================================================== 00:11:40.331 Total : 19005.84 74.24 3367.08 634.08 16175.96 00:11:40.331 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.331 rmmod nvme_tcp 00:11:40.331 rmmod nvme_fabrics 00:11:40.331 rmmod nvme_keyring 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2885461 ']' 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2885461 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2885461 ']' 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2885461 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2885461 00:11:40.331 17:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2885461' 00:11:40.331 killing process with pid 2885461 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2885461 00:11:40.331 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2885461 00:11:40.591 nvmf threads initialize successfully 00:11:40.591 bdev subsystem init successfully 00:11:40.591 created a nvmf target service 00:11:40.591 create targets's poll groups done 00:11:40.592 all subsystems of target started 00:11:40.592 nvmf target is running 00:11:40.592 all subsystems of target stopped 00:11:40.592 destroy targets's poll groups done 00:11:40.592 destroyed the nvmf target service 00:11:40.592 bdev subsystem finish successfully 00:11:40.592 nvmf threads destroy successfully 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.592 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.507 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 00:11:42.768 real 0m21.201s 00:11:42.768 user 0m46.523s 00:11:42.768 sys 0m6.768s 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 ************************************ 00:11:42.768 END TEST nvmf_example 00:11:42.768 ************************************ 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.768 ************************************ 00:11:42.768 START TEST nvmf_filesystem 00:11:42.768 ************************************ 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.768 * Looking for test storage... 00:11:42.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:42.768 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.032 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:43.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.033 --rc genhtml_branch_coverage=1 00:11:43.033 --rc genhtml_function_coverage=1 00:11:43.033 --rc genhtml_legend=1 00:11:43.033 --rc geninfo_all_blocks=1 00:11:43.033 --rc geninfo_unexecuted_blocks=1 00:11:43.033 00:11:43.033 ' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:43.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.033 --rc genhtml_branch_coverage=1 00:11:43.033 --rc genhtml_function_coverage=1 00:11:43.033 --rc genhtml_legend=1 00:11:43.033 --rc geninfo_all_blocks=1 00:11:43.033 --rc geninfo_unexecuted_blocks=1 00:11:43.033 00:11:43.033 ' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:43.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.033 --rc genhtml_branch_coverage=1 00:11:43.033 --rc genhtml_function_coverage=1 00:11:43.033 --rc genhtml_legend=1 00:11:43.033 --rc geninfo_all_blocks=1 00:11:43.033 --rc geninfo_unexecuted_blocks=1 00:11:43.033 00:11:43.033 ' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:43.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.033 --rc genhtml_branch_coverage=1 00:11:43.033 --rc genhtml_function_coverage=1 00:11:43.033 --rc genhtml_legend=1 00:11:43.033 --rc geninfo_all_blocks=1 00:11:43.033 --rc geninfo_unexecuted_blocks=1 00:11:43.033 00:11:43.033 ' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:43.033 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:43.033 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:43.033 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:43.033 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:43.034 #define SPDK_CONFIG_H 00:11:43.034 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:43.034 #define SPDK_CONFIG_APPS 1 00:11:43.034 #define SPDK_CONFIG_ARCH native 00:11:43.034 #undef SPDK_CONFIG_ASAN 00:11:43.034 #undef SPDK_CONFIG_AVAHI 00:11:43.034 #undef SPDK_CONFIG_CET 00:11:43.034 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:43.034 #define SPDK_CONFIG_COVERAGE 1 00:11:43.034 #define SPDK_CONFIG_CROSS_PREFIX 00:11:43.034 #undef SPDK_CONFIG_CRYPTO 00:11:43.034 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:43.034 #undef SPDK_CONFIG_CUSTOMOCF 00:11:43.034 #undef SPDK_CONFIG_DAOS 00:11:43.034 #define SPDK_CONFIG_DAOS_DIR 00:11:43.034 #define SPDK_CONFIG_DEBUG 1 00:11:43.034 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:43.034 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.034 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:43.034 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.034 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:43.034 #undef SPDK_CONFIG_DPDK_UADK 00:11:43.034 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.034 #define SPDK_CONFIG_EXAMPLES 1 00:11:43.034 #undef SPDK_CONFIG_FC 00:11:43.034 #define SPDK_CONFIG_FC_PATH 00:11:43.034 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:43.034 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:43.034 #define SPDK_CONFIG_FSDEV 1 00:11:43.034 #undef SPDK_CONFIG_FUSE 00:11:43.034 #undef SPDK_CONFIG_FUZZER 00:11:43.034 #define SPDK_CONFIG_FUZZER_LIB 00:11:43.034 #undef SPDK_CONFIG_GOLANG 00:11:43.034 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:43.034 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:43.034 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:43.034 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:43.034 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:43.034 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:43.034 #undef SPDK_CONFIG_HAVE_LZ4 00:11:43.034 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:43.034 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:43.034 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:43.034 #define SPDK_CONFIG_IDXD 1 00:11:43.034 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:43.034 #undef SPDK_CONFIG_IPSEC_MB 00:11:43.034 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:43.034 #define SPDK_CONFIG_ISAL 1 00:11:43.034 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:43.034 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:43.034 #define SPDK_CONFIG_LIBDIR 00:11:43.034 #undef SPDK_CONFIG_LTO 00:11:43.034 #define SPDK_CONFIG_MAX_LCORES 128 00:11:43.034 #define SPDK_CONFIG_NVME_CUSE 1 00:11:43.034 #undef SPDK_CONFIG_OCF 00:11:43.034 #define SPDK_CONFIG_OCF_PATH 00:11:43.034 #define SPDK_CONFIG_OPENSSL_PATH 00:11:43.034 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:43.034 #define SPDK_CONFIG_PGO_DIR 00:11:43.034 #undef SPDK_CONFIG_PGO_USE 00:11:43.034 #define SPDK_CONFIG_PREFIX /usr/local 00:11:43.034 #undef SPDK_CONFIG_RAID5F 00:11:43.034 #undef SPDK_CONFIG_RBD 00:11:43.034 #define SPDK_CONFIG_RDMA 1 00:11:43.034 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:43.034 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:43.034 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:43.034 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:43.034 #define SPDK_CONFIG_SHARED 1 00:11:43.034 #undef SPDK_CONFIG_SMA 00:11:43.034 
#define SPDK_CONFIG_TESTS 1 00:11:43.034 #undef SPDK_CONFIG_TSAN 00:11:43.034 #define SPDK_CONFIG_UBLK 1 00:11:43.034 #define SPDK_CONFIG_UBSAN 1 00:11:43.034 #undef SPDK_CONFIG_UNIT_TESTS 00:11:43.034 #undef SPDK_CONFIG_URING 00:11:43.034 #define SPDK_CONFIG_URING_PATH 00:11:43.034 #undef SPDK_CONFIG_URING_ZNS 00:11:43.034 #undef SPDK_CONFIG_USDT 00:11:43.034 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:43.034 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:43.034 #define SPDK_CONFIG_VFIO_USER 1 00:11:43.034 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:43.034 #define SPDK_CONFIG_VHOST 1 00:11:43.034 #define SPDK_CONFIG_VIRTIO 1 00:11:43.034 #undef SPDK_CONFIG_VTUNE 00:11:43.034 #define SPDK_CONFIG_VTUNE_DIR 00:11:43.034 #define SPDK_CONFIG_WERROR 1 00:11:43.034 #define SPDK_CONFIG_WPDK_DIR 00:11:43.034 #undef SPDK_CONFIG_XNVME 00:11:43.034 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.034 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:43.035 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:43.035 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:43.035 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:43.036 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:43.036 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.037 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2888255 ]] 00:11:43.037 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2888255 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.wl2Pgp 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wl2Pgp/tests/target /tmp/spdk.wl2Pgp 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=677969920 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:43.037 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606459904 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=116722380800 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356513280 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12634132480 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666890240 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677224448 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1032192 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:43.038 * Looking for test storage... 
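The storage probe traced above reduces to: walk the candidate directories, df each one, and keep the first mount with at least the requested free space, exporting it as SPDK_TEST_STORAGE. A simplified sketch of that selection, assuming testdir is already set by the caller (this is not the verbatim autotest_common.sh code; GNU df --output stands in for the awk parsing seen in the trace):

    # Simplified storage-candidate selection (sketch, assumes GNU df and a preset $testdir).
    requested_size=$((2 * 1024 * 1024 * 1024))            # 2147483648, as passed to set_test_storage above
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"
    for target_dir in "${storage_candidates[@]}"; do
        avail=$(df --output=avail --block-size=1 "$target_dir" 2>/dev/null | tail -n 1)
        if [[ -n $avail ]] && (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done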
00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=116722380800 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=14848724992 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.038 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:43.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.039 --rc genhtml_branch_coverage=1 00:11:43.039 --rc genhtml_function_coverage=1 00:11:43.039 --rc genhtml_legend=1 00:11:43.039 --rc geninfo_all_blocks=1 00:11:43.039 --rc geninfo_unexecuted_blocks=1 00:11:43.039 00:11:43.039 ' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:43.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.039 --rc genhtml_branch_coverage=1 00:11:43.039 --rc genhtml_function_coverage=1 00:11:43.039 --rc genhtml_legend=1 00:11:43.039 --rc geninfo_all_blocks=1 00:11:43.039 --rc geninfo_unexecuted_blocks=1 00:11:43.039 00:11:43.039 ' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:43.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.039 --rc genhtml_branch_coverage=1 00:11:43.039 --rc genhtml_function_coverage=1 00:11:43.039 --rc genhtml_legend=1 00:11:43.039 --rc geninfo_all_blocks=1 00:11:43.039 --rc geninfo_unexecuted_blocks=1 00:11:43.039 00:11:43.039 ' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:43.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.039 --rc genhtml_branch_coverage=1 00:11:43.039 --rc genhtml_function_coverage=1 00:11:43.039 --rc genhtml_legend=1 00:11:43.039 --rc geninfo_all_blocks=1 00:11:43.039 --rc geninfo_unexecuted_blocks=1 00:11:43.039 00:11:43.039 ' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.039 17:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.039 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.301 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:43.301 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:43.301 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.301 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:51.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:51.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.443 17:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:51.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:51.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:51.443 17:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.443 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:11:51.444 00:11:51.444 --- 10.0.0.2 ping statistics --- 00:11:51.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.444 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:51.444 00:11:51.444 --- 10.0.0.1 ping statistics --- 00:11:51.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.444 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 ************************************ 00:11:51.444 START TEST nvmf_filesystem_no_in_capsule 00:11:51.444 ************************************ 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2891893 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2891893 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2891893 ']' 00:11:51.444 
17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.444 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 [2024-10-01 17:11:48.931589] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:11:51.444 [2024-10-01 17:11:48.931653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.444 [2024-10-01 17:11:49.004445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.444 [2024-10-01 17:11:49.044271] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.444 [2024-10-01 17:11:49.044314] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.444 [2024-10-01 17:11:49.044322] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.444 [2024-10-01 17:11:49.044329] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.444 [2024-10-01 17:11:49.044336] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.444 [2024-10-01 17:11:49.044505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.444 [2024-10-01 17:11:49.044624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.444 [2024-10-01 17:11:49.044782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.444 [2024-10-01 17:11:49.044782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 [2024-10-01 17:11:49.780938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 [2024-10-01 17:11:49.908053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.444 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:51.444 { 00:11:51.444 "name": "Malloc1", 00:11:51.444 "aliases": [ 00:11:51.445 "d8266333-1cee-4385-8818-b5b1348f6bb9" 00:11:51.445 ], 00:11:51.445 "product_name": "Malloc disk", 00:11:51.445 "block_size": 512, 00:11:51.445 "num_blocks": 1048576, 00:11:51.445 "uuid": "d8266333-1cee-4385-8818-b5b1348f6bb9", 00:11:51.445 "assigned_rate_limits": { 00:11:51.445 "rw_ios_per_sec": 0, 00:11:51.445 "rw_mbytes_per_sec": 0, 00:11:51.445 "r_mbytes_per_sec": 0, 00:11:51.445 "w_mbytes_per_sec": 0 00:11:51.445 }, 00:11:51.445 "claimed": true, 00:11:51.445 "claim_type": "exclusive_write", 00:11:51.445 "zoned": false, 00:11:51.445 "supported_io_types": { 00:11:51.445 "read": 
true, 00:11:51.445 "write": true, 00:11:51.445 "unmap": true, 00:11:51.445 "flush": true, 00:11:51.445 "reset": true, 00:11:51.445 "nvme_admin": false, 00:11:51.445 "nvme_io": false, 00:11:51.445 "nvme_io_md": false, 00:11:51.445 "write_zeroes": true, 00:11:51.445 "zcopy": true, 00:11:51.445 "get_zone_info": false, 00:11:51.445 "zone_management": false, 00:11:51.445 "zone_append": false, 00:11:51.445 "compare": false, 00:11:51.445 "compare_and_write": false, 00:11:51.445 "abort": true, 00:11:51.445 "seek_hole": false, 00:11:51.445 "seek_data": false, 00:11:51.445 "copy": true, 00:11:51.445 "nvme_iov_md": false 00:11:51.445 }, 00:11:51.445 "memory_domains": [ 00:11:51.445 { 00:11:51.445 "dma_device_id": "system", 00:11:51.445 "dma_device_type": 1 00:11:51.445 }, 00:11:51.445 { 00:11:51.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.445 "dma_device_type": 2 00:11:51.445 } 00:11:51.445 ], 00:11:51.445 "driver_specific": {} 00:11:51.445 } 00:11:51.445 ]' 00:11:51.445 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:51.445 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:51.445 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:51.705 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:51.705 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:51.705 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:51.705 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:51.705 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.086 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.086 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.086 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.086 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.086 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.998 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.998 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.998 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.258 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.519 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.461 ************************************ 00:11:56.461 START TEST filesystem_ext4 00:11:56.461 ************************************ 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
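Before the per-filesystem subtests start, the trace above has already brought the target up. Condensed, the sequence is: create a TCP transport with in-capsule data disabled (-c 0, the "no_in_capsule" variant), back it with a 512 MiB / 512 B-block malloc bdev, expose it as nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, connect from the initiator side, and carve one GPT partition. A sketch of the equivalent standalone commands, with scripts/rpc.py standing in for the rpc_cmd wrapper used in the trace (host NQN/ID and addresses copied from this run):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: no in-capsule data
    $RPC bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect over TCP, then partition the new namespace for the fs tests.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe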
00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:56.461 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:56.461 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.461 Discarding device blocks: 0/522240 done 00:11:56.461 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:56.461 Filesystem UUID: 7f6f1b7c-06d0-47fe-b96a-1515b241291e 00:11:56.461 Superblock backups stored on blocks: 00:11:56.461 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:56.461 00:11:56.461 Allocating group tables: 0/64 done 00:11:56.461 Writing inode tables: 0/64 done 00:11:56.723 Creating journal (8192 blocks): done 00:11:56.983 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.983 00:11:56.983 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:56.983 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.562 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.562 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.562 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.562 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.562 
17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2891893 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.562 00:12:03.562 real 0m6.111s 00:12:03.562 user 0m0.030s 00:12:03.562 sys 0m0.077s 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.562 ************************************ 00:12:03.562 END TEST filesystem_ext4 00:12:03.562 ************************************ 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.562 ************************************ 00:12:03.562 START TEST filesystem_btrfs 00:12:03.562 ************************************ 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.562 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:03.563 17:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.563 btrfs-progs v6.8.1 00:12:03.563 See https://btrfs.readthedocs.io for more information. 00:12:03.563 00:12:03.563 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:03.563 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.563 this does not affect your deployments: 00:12:03.563 - DUP for metadata (-m dup) 00:12:03.563 - enabled no-holes (-O no-holes) 00:12:03.563 - enabled free-space-tree (-R free-space-tree) 00:12:03.563 00:12:03.563 Label: (null) 00:12:03.563 UUID: 9d2c8371-ae1e-45db-8251-299067e202de 00:12:03.563 Node size: 16384 00:12:03.563 Sector size: 4096 (CPU page size: 4096) 00:12:03.563 Filesystem size: 510.00MiB 00:12:03.563 Block group profiles: 00:12:03.563 Data: single 8.00MiB 00:12:03.563 Metadata: DUP 32.00MiB 00:12:03.563 System: DUP 8.00MiB 00:12:03.563 SSD detected: yes 00:12:03.563 Zoned device: no 00:12:03.563 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.563 Checksum: crc32c 00:12:03.563 Number of devices: 1 00:12:03.563 Devices: 00:12:03.563 ID SIZE PATH 00:12:03.563 1 510.00MiB /dev/nvme0n1p1 00:12:03.563 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:03.563 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2891893 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.822 
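mkfs.btrfs prints the block group profiles it chose for the 510 MiB partition (single for Data, DUP for Metadata and System). The test only mounts, writes and unmounts; if you wanted to confirm those profiles on the live filesystem rather than trust the mkfs banner, btrfs-progs can report them (an optional check, not something the harness runs):

  mount /dev/nvme0n1p1 /mnt/device
  btrfs filesystem df /mnt/device    # lists Data, single / Metadata, DUP / System, DUP with usage
  umount /mnt/device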
17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.822 00:12:03.822 real 0m1.238s 00:12:03.822 user 0m0.032s 00:12:03.822 sys 0m0.117s 00:12:03.822 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.823 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.823 ************************************ 00:12:03.823 END TEST filesystem_btrfs 00:12:03.823 ************************************ 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.084 ************************************ 00:12:04.084 START TEST filesystem_xfs 00:12:04.084 ************************************ 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:04.084 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:04.084 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:04.084 = sectsz=512 attr=2, projid32bit=1 00:12:04.084 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:04.084 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:04.084 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:04.084 = sunit=0 swidth=0 blks 00:12:04.084 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:04.084 log =internal log bsize=4096 blocks=16384, version=2 00:12:04.084 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:04.084 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:05.470 Discarding blocks...Done. 00:12:05.470 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.470 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2891893 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.380 00:12:07.380 real 0m3.093s 00:12:07.380 user 0m0.035s 00:12:07.380 sys 0m0.070s 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.380 ************************************ 00:12:07.380 END TEST filesystem_xfs 00:12:07.380 ************************************ 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.380 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.640 17:12:05 
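With ext4, btrfs and xfs all exercised, the suite tears its plumbing down in the order the surrounding entries record: remove the test partition under flock so nothing races the partition table, detach the NVMe/TCP controller on the host, then delete the subsystem over RPC and stop the target (the next entries show waitforserial_disconnect, nvmf_delete_subsystem and killprocess doing exactly that). Condensed, with rpc.py standing in for the harness's rpc_cmd wrapper:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # waitforserial_disconnect then polls lsblk
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess / wait in the trace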
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.640 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:07.640 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:07.640 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2891893 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2891893 ']' 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2891893 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891893 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891893' 00:12:07.640 killing process with pid 2891893 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2891893 00:12:07.640 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2891893 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.900 00:12:07.900 real 0m17.466s 00:12:07.900 user 1m9.040s 00:12:07.900 sys 0m1.382s 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.900 ************************************ 00:12:07.900 END TEST nvmf_filesystem_no_in_capsule 00:12:07.900 ************************************ 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.900 ************************************ 00:12:07.900 START TEST nvmf_filesystem_in_capsule 00:12:07.900 ************************************ 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:07.900 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2895587 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2895587 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2895587 ']' 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
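nvmfappstart amounts to launching nvmf_tgt inside the test network namespace and blocking until its RPC socket answers. A sketch using the command line the trace prints; the polling loop is an assumption about what waitforlisten does, and rpc_get_methods is just a cheap call to probe the socket with:

  # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0xF: run on four cores
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait for /var/tmp/spdk.sock to accept RPCs before configuring anything
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done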
00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.901 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.163 [2024-10-01 17:12:06.466651] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:12:08.163 [2024-10-01 17:12:06.466704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.163 [2024-10-01 17:12:06.537773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.163 [2024-10-01 17:12:06.573746] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.163 [2024-10-01 17:12:06.573786] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.163 [2024-10-01 17:12:06.573794] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.163 [2024-10-01 17:12:06.573801] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.163 [2024-10-01 17:12:06.573807] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.163 [2024-10-01 17:12:06.573953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.163 [2024-10-01 17:12:06.574098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.163 [2024-10-01 17:12:06.574385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.163 [2024-10-01 17:12:06.574387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.734 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.734 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:08.734 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:08.734 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.734 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.995 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:08.995 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:08.995 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.995 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 [2024-10-01 17:12:07.304660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.995 17:12:07 
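The only configuration difference from the no_in_capsule pass is the transport: -c 4096 sets the in-capsule data size, so writes of up to 4 KiB ride inside the command capsule instead of being fetched in a separate data transfer, which is what this second pass exercises. Issued directly with rpc.py (the -o and -u 8192 options are copied verbatim from the trace; -u is the I/O unit size):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096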
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 [2024-10-01 17:12:07.431122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:08.996 17:12:07 
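Provisioning the target side is four RPCs, all visible above; with rpc.py in place of the rpc_cmd wrapper:

  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB bdev, 512 B blocks -> 1048576 blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

get_bdev_size then multiplies the block_size and num_blocks fields that bdev_get_bdevs returns (512 * 1048576 = 536870912 bytes), which is the malloc_size the host-side size check compares against a few entries later.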
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:08.996 { 00:12:08.996 "name": "Malloc1", 00:12:08.996 "aliases": [ 00:12:08.996 "6d47610c-7450-486f-b451-62a99b75aae0" 00:12:08.996 ], 00:12:08.996 "product_name": "Malloc disk", 00:12:08.996 "block_size": 512, 00:12:08.996 "num_blocks": 1048576, 00:12:08.996 "uuid": "6d47610c-7450-486f-b451-62a99b75aae0", 00:12:08.996 "assigned_rate_limits": { 00:12:08.996 "rw_ios_per_sec": 0, 00:12:08.996 "rw_mbytes_per_sec": 0, 00:12:08.996 "r_mbytes_per_sec": 0, 00:12:08.996 "w_mbytes_per_sec": 0 00:12:08.996 }, 00:12:08.996 "claimed": true, 00:12:08.996 "claim_type": "exclusive_write", 00:12:08.996 "zoned": false, 00:12:08.996 "supported_io_types": { 00:12:08.996 "read": true, 00:12:08.996 "write": true, 00:12:08.996 "unmap": true, 00:12:08.996 "flush": true, 00:12:08.996 "reset": true, 00:12:08.996 "nvme_admin": false, 00:12:08.996 "nvme_io": false, 00:12:08.996 "nvme_io_md": false, 00:12:08.996 "write_zeroes": true, 00:12:08.996 "zcopy": true, 00:12:08.996 "get_zone_info": false, 00:12:08.996 "zone_management": false, 00:12:08.996 "zone_append": false, 00:12:08.996 "compare": false, 00:12:08.996 "compare_and_write": false, 00:12:08.996 "abort": true, 00:12:08.996 "seek_hole": false, 00:12:08.996 "seek_data": false, 00:12:08.996 "copy": true, 00:12:08.996 "nvme_iov_md": false 00:12:08.996 }, 00:12:08.996 "memory_domains": [ 00:12:08.996 { 00:12:08.996 "dma_device_id": "system", 00:12:08.996 "dma_device_type": 1 00:12:08.996 }, 00:12:08.996 { 00:12:08.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.996 "dma_device_type": 2 00:12:08.996 } 00:12:08.996 ], 00:12:08.996 "driver_specific": {} 00:12:08.996 } 00:12:08.996 ]' 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:08.996 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:09.257 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:09.257 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:09.257 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:09.257 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:09.257 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.644 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.644 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.644 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.644 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:10.644 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:12.562 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:12.826 17:12:11 
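On the host side the initiator attaches to the subsystem and polls until the namespace appears with the expected serial, and only then repeats the partitioning from the first pass. A sketch, with the host UUID elided and a plain until-loop standing in for the waitforserial helper (which, per the trace, bounds its retries with a counter and 2-second sleeps):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>

  # wait for the kernel to surface the namespace with the SPDK serial
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
      sleep 2
  done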
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:13.768 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:14.711 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:14.712 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:14.712 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:14.712 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.712 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.712 ************************************ 00:12:14.712 START TEST filesystem_in_capsule_ext4 00:12:14.712 ************************************ 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:14.712 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:14.712 mke2fs 1.47.0 (5-Feb-2023) 00:12:14.712 Discarding device blocks: 0/522240 done 00:12:14.712 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:14.712 Filesystem UUID: 15209465-6c88-4dfd-8a5c-d7f1b2bc30a6 00:12:14.712 Superblock backups stored on blocks: 00:12:14.712 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:14.712 00:12:14.712 Allocating group tables: 0/64 done 00:12:14.712 Writing inode tables: 
0/64 done 00:12:17.291 Creating journal (8192 blocks): done 00:12:19.323 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:12:19.323 00:12:19.323 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:19.323 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.906 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.906 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:25.906 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.906 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:25.906 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2895587 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.907 00:12:25.907 real 0m10.310s 00:12:25.907 user 0m0.028s 00:12:25.907 sys 0m0.084s 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:25.907 ************************************ 00:12:25.907 END TEST filesystem_in_capsule_ext4 00:12:25.907 ************************************ 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.907 
************************************ 00:12:25.907 START TEST filesystem_in_capsule_btrfs 00:12:25.907 ************************************ 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:25.907 btrfs-progs v6.8.1 00:12:25.907 See https://btrfs.readthedocs.io for more information. 00:12:25.907 00:12:25.907 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:25.907 NOTE: several default settings have changed in version 5.15, please make sure 00:12:25.907 this does not affect your deployments: 00:12:25.907 - DUP for metadata (-m dup) 00:12:25.907 - enabled no-holes (-O no-holes) 00:12:25.907 - enabled free-space-tree (-R free-space-tree) 00:12:25.907 00:12:25.907 Label: (null) 00:12:25.907 UUID: 226ca1fb-b232-46e5-b5de-b0f60b803acc 00:12:25.907 Node size: 16384 00:12:25.907 Sector size: 4096 (CPU page size: 4096) 00:12:25.907 Filesystem size: 510.00MiB 00:12:25.907 Block group profiles: 00:12:25.907 Data: single 8.00MiB 00:12:25.907 Metadata: DUP 32.00MiB 00:12:25.907 System: DUP 8.00MiB 00:12:25.907 SSD detected: yes 00:12:25.907 Zoned device: no 00:12:25.907 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:25.907 Checksum: crc32c 00:12:25.907 Number of devices: 1 00:12:25.907 Devices: 00:12:25.907 ID SIZE PATH 00:12:25.907 1 510.00MiB /dev/nvme0n1p1 00:12:25.907 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:25.907 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2895587 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.907 00:12:25.907 real 0m0.765s 00:12:25.907 user 0m0.022s 00:12:25.907 sys 0m0.124s 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:25.907 ************************************ 00:12:25.907 END TEST filesystem_in_capsule_btrfs 00:12:25.907 ************************************ 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.907 ************************************ 00:12:25.907 START TEST filesystem_in_capsule_xfs 00:12:25.907 ************************************ 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:25.907 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.907 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.907 = sectsz=512 attr=2, projid32bit=1 00:12:25.907 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.907 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.907 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.907 = sunit=0 swidth=0 blks 00:12:25.907 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.907 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.907 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.907 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:26.850 Discarding blocks...Done. 
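The mkfs.xfs geometry is consistent with the 510 MiB partition: 4 allocation groups of 32640 blocks give 130560 data blocks, and 130560 * 4096-byte blocks = 534773760 bytes, exactly 510 MiB. If you wanted to re-read that geometry from the mounted filesystem rather than from the mkfs banner, xfs_info prints the same table (an optional check, not part of the test):

  mount /dev/nvme0n1p1 /mnt/device
  xfs_info /mnt/device     # same meta-data/data/log/realtime layout as printed above
  umount /mnt/device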
00:12:26.850 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:26.850 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2895587 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.763 00:12:28.763 real 0m2.676s 00:12:28.763 user 0m0.033s 00:12:28.763 sys 0m0.074s 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 ************************************ 00:12:28.763 END TEST filesystem_in_capsule_xfs 00:12:28.763 ************************************ 00:12:28.763 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2895587 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2895587 ']' 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2895587 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895587 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895587' 00:12:28.763 killing process with pid 2895587 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2895587 00:12:28.763 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2895587 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.025 00:12:29.025 real 0m21.063s 00:12:29.025 user 1m23.405s 00:12:29.025 sys 0m1.393s 00:12:29.025 17:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.025 ************************************ 00:12:29.025 END TEST nvmf_filesystem_in_capsule 00:12:29.025 ************************************ 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.025 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.025 rmmod nvme_tcp 00:12:29.025 rmmod nvme_fabrics 00:12:29.025 rmmod nvme_keyring 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.286 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.212 00:12:31.212 real 0m48.494s 00:12:31.212 user 2m34.633s 00:12:31.212 sys 0m8.499s 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.212 
************************************ 00:12:31.212 END TEST nvmf_filesystem 00:12:31.212 ************************************ 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.212 17:12:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.474 ************************************ 00:12:31.474 START TEST nvmf_target_discovery 00:12:31.474 ************************************ 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:31.474 * Looking for test storage... 00:12:31.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.474 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:31.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.474 --rc genhtml_branch_coverage=1 00:12:31.475 --rc genhtml_function_coverage=1 00:12:31.475 --rc genhtml_legend=1 00:12:31.475 --rc geninfo_all_blocks=1 00:12:31.475 --rc geninfo_unexecuted_blocks=1 00:12:31.475 00:12:31.475 ' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.475 --rc genhtml_branch_coverage=1 00:12:31.475 --rc genhtml_function_coverage=1 00:12:31.475 --rc genhtml_legend=1 00:12:31.475 --rc geninfo_all_blocks=1 00:12:31.475 --rc geninfo_unexecuted_blocks=1 00:12:31.475 00:12:31.475 ' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.475 --rc genhtml_branch_coverage=1 00:12:31.475 --rc genhtml_function_coverage=1 00:12:31.475 --rc genhtml_legend=1 00:12:31.475 --rc geninfo_all_blocks=1 00:12:31.475 --rc geninfo_unexecuted_blocks=1 00:12:31.475 00:12:31.475 ' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.475 --rc genhtml_branch_coverage=1 00:12:31.475 --rc genhtml_function_coverage=1 00:12:31.475 --rc genhtml_legend=1 00:12:31.475 --rc geninfo_all_blocks=1 00:12:31.475 --rc geninfo_unexecuted_blocks=1 00:12:31.475 00:12:31.475 ' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.475 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.606 17:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.606 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.607 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.607 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:39.607 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
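Note: the entries above show how the test's device-gathering step maps each supported NIC PCI function (here 0000:4b:00.0 and 0000:4b:00.1, both ice-bound 0x8086:0x159b parts) to its kernel network interface by globbing sysfs. A minimal sketch of that lookup, assuming only standard sysfs paths; the operstate check stands in for the log's "[[ up == up ]]" test and the PCI address is just the one seen above:
#!/usr/bin/env bash
# Sketch: list the net devices that sit under a given PCI function,
# mirroring the pci_net_devs glob captured in the log above.
pci=${1:-0000:4b:00.0}                               # PCI address from the log; override as needed
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # one directory per bound netdev, e.g. .../net/cvl_0_0
for dev_path in "${pci_net_devs[@]}"; do
    [[ -e $dev_path ]] || continue                   # glob did not match: no netdev bound to this function
    dev=${dev_path##*/}                              # strip the sysfs prefix, keeping only the interface name
    state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
    echo "Found net device under $pci: $dev (operstate: ${state:-unknown})"
done
The same loop then repeats for the second port, which is what the "Found net devices under 0000:4b:00.1: cvl_0_1" entry just below records.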
00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:39.607 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.607 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.607 17:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:12:39.607 00:12:39.607 --- 10.0.0.2 ping statistics --- 00:12:39.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.607 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:12:39.607 00:12:39.607 --- 10.0.0.1 ping statistics --- 00:12:39.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.607 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:39.607 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2904627 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2904627 00:12:39.608 17:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2904627 ']' 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.608 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 [2024-10-01 17:12:37.286387] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:12:39.608 [2024-10-01 17:12:37.286456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.608 [2024-10-01 17:12:37.359340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.608 [2024-10-01 17:12:37.398551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.608 [2024-10-01 17:12:37.398614] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.608 [2024-10-01 17:12:37.398623] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.608 [2024-10-01 17:12:37.398630] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.608 [2024-10-01 17:12:37.398636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
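Note: the nvmfappstart/waitforlisten sequence above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks until the target answers on its RPC socket; the reactor start-up notices that follow confirm it came up on cores 0-3. A minimal sketch of that launch-and-wait pattern, assuming the stock scripts/rpc.py client under the repo root seen in the log and the default /var/tmp/spdk.sock socket; the retry budget and sleep interval are illustrative, not the values autotest_common.sh actually uses:
#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the test namespace and poll its RPC socket until it is ready.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root from the log
RPC_SOCK=/var/tmp/spdk.sock                                  # app socket named in the "Waiting for process..." entry

ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do                                    # illustrative retry budget
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
Once the target is listening, the test proceeds exactly as logged below: nvmf_create_transport -t tcp, then per-subsystem bdev_null_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls before the discovery checks.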
00:12:39.608 [2024-10-01 17:12:37.398783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.608 [2024-10-01 17:12:37.398911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.608 [2024-10-01 17:12:37.399077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.608 [2024-10-01 17:12:37.399077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.608 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 [2024-10-01 17:12:38.145809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.874 Null1 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.874 17:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.874 [2024-10-01 17:12:38.206127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.874 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 Null2 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:39.875 Null3 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 Null4 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:40.162 00:12:40.162 Discovery Log Number of Records 6, Generation counter 6 00:12:40.162 =====Discovery Log Entry 0====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: current discovery subsystem 00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4420 00:12:40.162 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: explicit discovery connections, duplicate discovery information 00:12:40.162 sectype: none 00:12:40.162 =====Discovery Log Entry 1====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: nvme subsystem 00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4420 00:12:40.162 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: none 00:12:40.162 sectype: none 00:12:40.162 =====Discovery Log Entry 2====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: nvme subsystem 00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4420 00:12:40.162 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: none 00:12:40.162 sectype: none 00:12:40.162 =====Discovery Log Entry 3====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: nvme subsystem 00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4420 00:12:40.162 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: none 00:12:40.162 sectype: none 00:12:40.162 =====Discovery Log Entry 4====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: nvme subsystem 
00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4420 00:12:40.162 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: none 00:12:40.162 sectype: none 00:12:40.162 =====Discovery Log Entry 5====== 00:12:40.162 trtype: tcp 00:12:40.162 adrfam: ipv4 00:12:40.162 subtype: discovery subsystem referral 00:12:40.162 treq: not required 00:12:40.162 portid: 0 00:12:40.162 trsvcid: 4430 00:12:40.162 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.162 traddr: 10.0.0.2 00:12:40.162 eflags: none 00:12:40.162 sectype: none 00:12:40.162 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:40.162 Perform nvmf subsystem discovery via RPC 00:12:40.162 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:40.162 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.162 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.162 [ 00:12:40.162 { 00:12:40.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.162 "subtype": "Discovery", 00:12:40.162 "listen_addresses": [ 00:12:40.162 { 00:12:40.162 "trtype": "TCP", 00:12:40.162 "adrfam": "IPv4", 00:12:40.162 "traddr": "10.0.0.2", 00:12:40.162 "trsvcid": "4420" 00:12:40.162 } 00:12:40.162 ], 00:12:40.162 "allow_any_host": true, 00:12:40.162 "hosts": [] 00:12:40.162 }, 00:12:40.162 { 00:12:40.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.162 "subtype": "NVMe", 00:12:40.162 "listen_addresses": [ 00:12:40.162 { 00:12:40.162 "trtype": "TCP", 00:12:40.162 "adrfam": "IPv4", 00:12:40.162 "traddr": "10.0.0.2", 00:12:40.162 "trsvcid": "4420" 00:12:40.162 } 00:12:40.162 ], 00:12:40.162 "allow_any_host": true, 00:12:40.162 "hosts": [], 00:12:40.162 "serial_number": "SPDK00000000000001", 00:12:40.162 "model_number": "SPDK bdev Controller", 00:12:40.162 "max_namespaces": 32, 00:12:40.162 "min_cntlid": 1, 00:12:40.162 "max_cntlid": 65519, 00:12:40.162 "namespaces": [ 00:12:40.162 { 00:12:40.162 "nsid": 1, 00:12:40.162 "bdev_name": "Null1", 00:12:40.162 "name": "Null1", 00:12:40.162 "nguid": "09AC04BD7F154F17BBA9F353A5BBDAEC", 00:12:40.162 "uuid": "09ac04bd-7f15-4f17-bba9-f353a5bbdaec" 00:12:40.162 } 00:12:40.162 ] 00:12:40.162 }, 00:12:40.162 { 00:12:40.162 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:40.162 "subtype": "NVMe", 00:12:40.162 "listen_addresses": [ 00:12:40.162 { 00:12:40.162 "trtype": "TCP", 00:12:40.162 "adrfam": "IPv4", 00:12:40.162 "traddr": "10.0.0.2", 00:12:40.162 "trsvcid": "4420" 00:12:40.162 } 00:12:40.162 ], 00:12:40.162 "allow_any_host": true, 00:12:40.162 "hosts": [], 00:12:40.162 "serial_number": "SPDK00000000000002", 00:12:40.162 "model_number": "SPDK bdev Controller", 00:12:40.162 "max_namespaces": 32, 00:12:40.162 "min_cntlid": 1, 00:12:40.162 "max_cntlid": 65519, 00:12:40.162 "namespaces": [ 00:12:40.162 { 00:12:40.162 "nsid": 1, 00:12:40.162 "bdev_name": "Null2", 00:12:40.162 "name": "Null2", 00:12:40.163 "nguid": "B3238F5C16134319A04B352D6D04D63B", 00:12:40.163 "uuid": "b3238f5c-1613-4319-a04b-352d6d04d63b" 00:12:40.163 } 00:12:40.163 ] 00:12:40.163 }, 00:12:40.163 { 00:12:40.163 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:40.163 "subtype": "NVMe", 00:12:40.163 "listen_addresses": [ 00:12:40.163 { 00:12:40.163 "trtype": "TCP", 00:12:40.163 "adrfam": "IPv4", 00:12:40.163 "traddr": "10.0.0.2", 
00:12:40.163 "trsvcid": "4420" 00:12:40.163 } 00:12:40.163 ], 00:12:40.163 "allow_any_host": true, 00:12:40.163 "hosts": [], 00:12:40.163 "serial_number": "SPDK00000000000003", 00:12:40.163 "model_number": "SPDK bdev Controller", 00:12:40.163 "max_namespaces": 32, 00:12:40.163 "min_cntlid": 1, 00:12:40.163 "max_cntlid": 65519, 00:12:40.163 "namespaces": [ 00:12:40.163 { 00:12:40.163 "nsid": 1, 00:12:40.163 "bdev_name": "Null3", 00:12:40.163 "name": "Null3", 00:12:40.163 "nguid": "0DF621A5CFF04C9E93EB28B120C4A49F", 00:12:40.163 "uuid": "0df621a5-cff0-4c9e-93eb-28b120c4a49f" 00:12:40.163 } 00:12:40.163 ] 00:12:40.163 }, 00:12:40.163 { 00:12:40.163 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:40.163 "subtype": "NVMe", 00:12:40.163 "listen_addresses": [ 00:12:40.163 { 00:12:40.163 "trtype": "TCP", 00:12:40.163 "adrfam": "IPv4", 00:12:40.163 "traddr": "10.0.0.2", 00:12:40.163 "trsvcid": "4420" 00:12:40.163 } 00:12:40.163 ], 00:12:40.163 "allow_any_host": true, 00:12:40.163 "hosts": [], 00:12:40.163 "serial_number": "SPDK00000000000004", 00:12:40.163 "model_number": "SPDK bdev Controller", 00:12:40.163 "max_namespaces": 32, 00:12:40.163 "min_cntlid": 1, 00:12:40.163 "max_cntlid": 65519, 00:12:40.163 "namespaces": [ 00:12:40.163 { 00:12:40.163 "nsid": 1, 00:12:40.163 "bdev_name": "Null4", 00:12:40.163 "name": "Null4", 00:12:40.163 "nguid": "3D6C1AD7451C4BD9AA6BF5C733A582E6", 00:12:40.163 "uuid": "3d6c1ad7-451c-4bd9-aa6b-f5c733a582e6" 00:12:40.163 } 00:12:40.163 ] 00:12:40.163 } 00:12:40.163 ] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.163 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:40.446 17:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.446 rmmod nvme_tcp 00:12:40.446 rmmod nvme_fabrics 00:12:40.446 rmmod nvme_keyring 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2904627 ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2904627 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2904627 ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2904627 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2904627 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2904627' 00:12:40.446 killing process with pid 2904627 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2904627 00:12:40.446 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2904627 00:12:40.743 17:12:39 
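The teardown above mirrors the setup: for each of cnode1..cnode4 the subsystem is deleted first, then its backing null bdev, then the 10.0.0.2:4430 referral seen in Discovery Log Entry 5 is removed, and bdev_get_bdevs is expected to come back empty. Condensed into a sketch with the same rpc_cmd calls as the trace:

  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # subsystem first
      rpc_cmd bdev_null_delete "Null$i"                             # then its backing bdev
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 # drop the referral advertised on 4430
  rpc_cmd bdev_get_bdevs | jq -r '.[].name'                         # should print nothing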
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.743 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.653 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.653 00:12:42.653 real 0m11.355s 00:12:42.653 user 0m8.824s 00:12:42.653 sys 0m5.848s 00:12:42.653 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.653 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 ************************************ 00:12:42.654 END TEST nvmf_target_discovery 00:12:42.654 ************************************ 00:12:42.654 17:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:42.654 17:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.654 17:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.654 17:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 ************************************ 00:12:42.654 START TEST nvmf_referrals 00:12:42.654 ************************************ 00:12:42.654 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:42.916 * Looking for test storage... 
00:12:42.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.916 --rc genhtml_branch_coverage=1 00:12:42.916 --rc genhtml_function_coverage=1 00:12:42.916 --rc genhtml_legend=1 00:12:42.916 --rc geninfo_all_blocks=1 00:12:42.916 --rc geninfo_unexecuted_blocks=1 00:12:42.916 00:12:42.916 ' 00:12:42.916 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.916 --rc genhtml_branch_coverage=1 00:12:42.916 --rc genhtml_function_coverage=1 00:12:42.916 --rc genhtml_legend=1 00:12:42.916 --rc geninfo_all_blocks=1 00:12:42.916 --rc geninfo_unexecuted_blocks=1 00:12:42.916 00:12:42.916 ' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:42.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.917 --rc genhtml_branch_coverage=1 00:12:42.917 --rc genhtml_function_coverage=1 00:12:42.917 --rc genhtml_legend=1 00:12:42.917 --rc geninfo_all_blocks=1 00:12:42.917 --rc geninfo_unexecuted_blocks=1 00:12:42.917 00:12:42.917 ' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:42.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.917 --rc genhtml_branch_coverage=1 00:12:42.917 --rc genhtml_function_coverage=1 00:12:42.917 --rc genhtml_legend=1 00:12:42.917 --rc geninfo_all_blocks=1 00:12:42.917 --rc geninfo_unexecuted_blocks=1 00:12:42.917 00:12:42.917 ' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
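The stretch above is referrals.sh probing the installed lcov through the harness's lt/cmp_versions helpers: both version strings are split on dots and compared field by field, which is how 1.15 against 2 resolves to "older" and selects the matching set of LCOV options. A standalone sketch of that dot-split comparison (illustrative only, not the harness code itself):

  ver_lt() {                              # true when $1 is an older version than $2
      local IFS=.- i a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} && i < ${#b[@]}; i++)); do
          (( a[i] < b[i] )) && return 0
          (( a[i] > b[i] )) && return 1
      done
      (( ${#a[@]} < ${#b[@]} ))           # equal prefix: the shorter string is older
  }
  ver_lt 1.15 2 && echo '1.15 predates 2.x'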
# uname -s 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.917 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:51.060 17:12:48 
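Before any traffic flows, nvmf/common.sh pins down the identifiers the rest of referrals.sh keys off: a host NQN generated with nvme gen-hostnqn, the three referral addresses, the referral port, the well-known discovery NQN and the test subsystem NQN. Collected from the trace above into one block (the host ID shown is simply the UUID portion of the host NQN generated in this run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)                    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be    # UUID portion of the host NQN in this run
  NVMF_REFERRAL_IP_1=127.0.0.2
  NVMF_REFERRAL_IP_2=127.0.0.3
  NVMF_REFERRAL_IP_3=127.0.0.4
  NVMF_PORT_REFERRAL=4430
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  NQN=nqn.2016-06.io.spdk:cnode1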
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:51.060 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:51.060 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:51.060 
17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.060 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:51.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:51.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:51.061 17:12:48 
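gather_supported_nvmf_pci_devs above matches the two Intel E810 functions (device ID 0x159b at 0000:4b:00.0 and 0000:4b:00.1) and resolves each PCI address to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. The core of that lookup as a sketch:

  for pci in 0000:4b:00.0 0000:4b:00.1; do                  # the two E810 ports found in this run
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # netdev directories exported by the ice driver
      echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
  done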
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:12:51.061 00:12:51.061 --- 10.0.0.2 ping statistics --- 00:12:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.061 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:12:51.061 00:12:51.061 --- 10.0.0.1 ping statistics --- 00:12:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.061 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2909030 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2909030 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2909030 ']' 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
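nvmf_tcp_init above splits the two ports into a target/initiator pair: cvl_0_0 moves into a fresh network namespace and gets 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an iptables rule admits the NVMe/TCP port, and reachability is ping-checked in both directions. The same topology, condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk                                       # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator side
  ping -c 1 10.0.0.2                                                 # root namespace reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and the target reaches back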
00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.061 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 [2024-10-01 17:12:48.761606] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:12:51.061 [2024-10-01 17:12:48.761677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.061 [2024-10-01 17:12:48.832415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.061 [2024-10-01 17:12:48.868033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.061 [2024-10-01 17:12:48.868074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.061 [2024-10-01 17:12:48.868082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.061 [2024-10-01 17:12:48.868089] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.061 [2024-10-01 17:12:48.868095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.061 [2024-10-01 17:12:48.868167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.061 [2024-10-01 17:12:48.868295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.061 [2024-10-01 17:12:48.868460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.061 [2024-10-01 17:12:48.868461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.061 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 [2024-10-01 17:12:49.609755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
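With the namespace in place, nvmfappstart launches nvmf_tgt inside it and referrals.sh@40 creates the TCP transport that everything else in this test attaches to. A condensed sketch (rpc_cmd is the harness's wrapper around the target's JSON-RPC socket; the transport options are taken verbatim from the trace):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # wait until the target is listening on /var/tmp/spdk.sock, then:
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192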
00:12:51.323 [2024-10-01 17:12:49.625948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.323 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.584 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:51.584 17:12:50 
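referrals.sh@41 through @57 above form the first assertion of the test: a discovery listener is opened on 10.0.0.2:8009, three referrals are registered, the referral list is read back both through the RPC and through an actual nvme discover against that listener, and then the referrals are removed and the list is confirmed empty. A sketch of that round trip with the same calls:

  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery      # discovery service queried below
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort      # RPC view of the three IPs
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort   # on-the-wire view
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                              # back to 0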
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.584 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:51.845 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.106 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.367 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.628 17:12:50 
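The second half of the test (referrals.sh@60 onwards) re-adds 127.0.0.2:4430 with an explicit subsystem NQN, once as discovery and once as nqn.2016-06.io.spdk:cnode1, and the get_discovery_entries filters above then pick the two record types apart by subtype. A sketch of that distinction, with a small helper doing roughly what the harness helper does:

  get_discovery_entries() {       # print discovery records of one subtype
      nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
          | jq ".records[] | select(.subtype == \"$1\")"
  }
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  get_discovery_entries 'nvme subsystem' | jq -r .subnqn                 # -> nqn.2016-06.io.spdk:cnode1
  get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # -> nqn.2014-08.org.nvmexpress.discovery

This is how the test shows that a referral carrying a specific subsystem NQN surfaces as an "nvme subsystem" record in the discovery log, while a referral to the discovery NQN surfaces as a "discovery subsystem referral" record.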
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:52.628 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:52.628 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:52.889 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.150 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
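The trace above exercises SPDK's discovery-referral RPCs end to end: a referral is added for both the discovery subsystem and nqn.2016-06.io.spdk:cnode1, checked from the target side (nvmf_discovery_get_referrals) and from the host side (nvme discover -o json), then removed again until the referral list is empty. The rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py; a minimal standalone sketch of the same cycle — assuming a target already serving discovery on 10.0.0.2:8009, rpc.py on its default RPC socket, and the RPC/NQN names taken from the trace — would look roughly like:

    #!/usr/bin/env bash
    # Hypothetical standalone replay of the referral steps in the trace above.
    RPC=./scripts/rpc.py              # path relative to an SPDK checkout (assumption)
    HOSTNQN=$(nvme gen-hostnqn)

    # Add a referral to a second discovery service and to a specific subsystem.
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # Target-side view: referral addresses as reported by the RPC.
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: the same referrals seen as discovery log entries.
    nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove both referrals and confirm the list is empty again.
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    $RPC nvmf_discovery_get_referrals | jq length   # the test expects 0 here

This is a sketch, not the test script itself: referrals.sh additionally sorts and string-compares both views on every step, which is what the [[ ... == ... ]] checks in the trace are doing.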
00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.410 rmmod nvme_tcp 00:12:53.410 rmmod nvme_fabrics 00:12:53.410 rmmod nvme_keyring 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2909030 ']' 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2909030 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2909030 ']' 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2909030 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:53.410 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.671 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2909030 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2909030' 00:12:53.671 killing process with pid 2909030 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2909030 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2909030 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.671 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.671 17:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:56.220 00:12:56.220 real 0m13.028s 00:12:56.220 user 0m16.204s 00:12:56.220 sys 0m6.233s 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.220 ************************************ 00:12:56.220 END TEST nvmf_referrals 00:12:56.220 ************************************ 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.220 ************************************ 00:12:56.220 START TEST nvmf_connect_disconnect 00:12:56.220 ************************************ 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:56.220 * Looking for test storage... 00:12:56.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.220 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.221 17:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.221 --rc genhtml_branch_coverage=1 00:12:56.221 --rc genhtml_function_coverage=1 00:12:56.221 --rc genhtml_legend=1 00:12:56.221 --rc geninfo_all_blocks=1 00:12:56.221 --rc geninfo_unexecuted_blocks=1 00:12:56.221 00:12:56.221 ' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.221 --rc genhtml_branch_coverage=1 00:12:56.221 --rc genhtml_function_coverage=1 00:12:56.221 --rc genhtml_legend=1 00:12:56.221 --rc geninfo_all_blocks=1 00:12:56.221 --rc geninfo_unexecuted_blocks=1 00:12:56.221 00:12:56.221 ' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.221 --rc genhtml_branch_coverage=1 00:12:56.221 --rc genhtml_function_coverage=1 00:12:56.221 --rc genhtml_legend=1 00:12:56.221 --rc geninfo_all_blocks=1 00:12:56.221 --rc geninfo_unexecuted_blocks=1 00:12:56.221 00:12:56.221 ' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.221 --rc genhtml_branch_coverage=1 00:12:56.221 --rc genhtml_function_coverage=1 00:12:56.221 --rc genhtml_legend=1 00:12:56.221 --rc geninfo_all_blocks=1 00:12:56.221 --rc geninfo_unexecuted_blocks=1 00:12:56.221 00:12:56.221 ' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.221 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.222 17:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:56.222 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.363 
17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:04.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.363 
17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:04.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.363 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:04.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
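The enumeration above is how nvmf/common.sh picks the NICs for the test: it builds whitelists of Intel (e810/x722) and Mellanox PCI device IDs, keeps the e810 functions present on this host (0000:4b:00.0 and 0000:4b:00.1, driver ice), and is now resolving each PCI function to its kernel netdev through sysfs — the "Found net devices under ..." lines that follow. A minimal sketch of that resolution step, using the sysfs path pattern from the trace (the PCI addresses are specific to this CI host):

    # Map a PCI function to the network interface(s) the kernel created for it.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
    # On this host this prints cvl_0_0 and cvl_0_1, which the script then splits
    # into the target-side and initiator-side interfaces.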
00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:04.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:13:04.364 00:13:04.364 --- 10.0.0.2 ping statistics --- 00:13:04.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.364 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:04.364 00:13:04.364 --- 10.0.0.1 ping statistics --- 00:13:04.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.364 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=2914095 00:13:04.364 17:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2914095 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2914095 ']' 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.364 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.364 [2024-10-01 17:13:01.902460] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:13:04.364 [2024-10-01 17:13:01.902523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.364 [2024-10-01 17:13:01.973397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.364 [2024-10-01 17:13:02.007585] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.364 [2024-10-01 17:13:02.007626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.364 [2024-10-01 17:13:02.007635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.364 [2024-10-01 17:13:02.007643] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.364 [2024-10-01 17:13:02.007648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
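Because both test interfaces live on the same physical host, nvmf_tcp_init isolates the target side in a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, reachability is ping-checked in both directions, and nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace (interface names are host-specific, the nvmf_tgt path is shortened to a generic SPDK build path, and the -m comment on the iptables rule is dropped for brevity):

    # Target NIC goes into its own namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in, then sanity-check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the SPDK NVMe-oF target inside the namespace (core mask 0xF, full tracing).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The DPDK EAL and reactor notices that follow in the log are the target's own startup output once waitforlisten sees its RPC socket come up.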
00:13:04.364 [2024-10-01 17:13:02.007800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.364 [2024-10-01 17:13:02.007915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.364 [2024-10-01 17:13:02.008074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.364 [2024-10-01 17:13:02.008074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.364 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 [2024-10-01 17:13:02.751869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 17:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.365 [2024-10-01 17:13:02.811041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:04.365 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:06.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.040 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:11.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.079 rmmod nvme_tcp 00:16:59.079 rmmod nvme_fabrics 00:16:59.079 rmmod nvme_keyring 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2914095 ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2914095 ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
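The connect_disconnect test above is the simplest data-path smoke test in this suite: one malloc bdev is exported through one subsystem on one TCP listener, and the host connects and disconnects 100 times — each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line is one iteration. Reduced to rpc.py / nvme-cli equivalents, the target configuration and a single iteration look roughly like the sketch below; the script drives this through rpc_cmd and the NVME_CONNECT='nvme connect -i 8' wrapper, and the exact connect arguments are assumed here from the listener created in the trace rather than copied from connect_disconnect.sh:

    RPC=./scripts/rpc.py                 # path is an assumption, as above
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem.
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                   # returns the bdev name Malloc0
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Host side: one of the 100 connect/disconnect iterations.
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
    nvme disconnect -n "$NQN"   # prints "NQN:... disconnected 1 controller(s)"

After the loop, the teardown that follows (nvmftestfini) unloads the nvme-tcp/nvme-fabrics modules, kills the nvmf_tgt process by pid, restores iptables, and deletes the network namespace.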
00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2914095' 00:16:59.079 killing process with pid 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2914095 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.079 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.992 00:17:00.992 real 4m5.091s 00:17:00.992 user 15m32.619s 00:17:00.992 sys 0m26.468s 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:00.992 ************************************ 00:17:00.992 END TEST nvmf_connect_disconnect 00:17:00.992 ************************************ 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.992 17:16:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.992 ************************************ 00:17:00.992 START TEST nvmf_multitarget 00:17:00.992 ************************************ 00:17:00.992 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:01.255 * Looking for test storage... 00:17:01.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.255 --rc genhtml_branch_coverage=1 00:17:01.255 --rc genhtml_function_coverage=1 00:17:01.255 --rc genhtml_legend=1 00:17:01.255 --rc geninfo_all_blocks=1 00:17:01.255 --rc geninfo_unexecuted_blocks=1 00:17:01.255 00:17:01.255 ' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.255 --rc genhtml_branch_coverage=1 00:17:01.255 --rc genhtml_function_coverage=1 00:17:01.255 --rc genhtml_legend=1 00:17:01.255 --rc geninfo_all_blocks=1 00:17:01.255 --rc geninfo_unexecuted_blocks=1 00:17:01.255 00:17:01.255 ' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.255 --rc genhtml_branch_coverage=1 00:17:01.255 --rc genhtml_function_coverage=1 00:17:01.255 --rc genhtml_legend=1 00:17:01.255 --rc geninfo_all_blocks=1 00:17:01.255 --rc geninfo_unexecuted_blocks=1 00:17:01.255 00:17:01.255 ' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.255 --rc genhtml_branch_coverage=1 00:17:01.255 --rc genhtml_function_coverage=1 00:17:01.255 --rc genhtml_legend=1 00:17:01.255 --rc geninfo_all_blocks=1 00:17:01.255 --rc geninfo_unexecuted_blocks=1 00:17:01.255 00:17:01.255 ' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.255 17:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:01.255 17:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.255 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:07.841 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:07.841 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:07.841 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:07.841 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.841 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.842 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:07.842 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:17:08.108 00:17:08.108 --- 10.0.0.2 ping statistics --- 00:17:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.108 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:17:08.108 00:17:08.108 --- 10.0.0.1 ping statistics --- 00:17:08.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.108 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2965423 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2965423 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2965423 ']' 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.108 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.108 [2024-10-01 17:17:06.593936] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
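The nvmf_tcp_init sequence above gives the target its own network namespace, leaves the second port in the root namespace as the initiator, and only starts nvmf_tgt once a ping works in both directions. A condensed sketch of that wiring using the interface names and addresses visible in the trace (an illustration of the sequence, not the exact nvmf/common.sh code):

    # Target interface moves into a private netns; initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 on the initiator side; the trace tags the rule with an SPDK_NVMF comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    # Finally the target itself runs inside the namespace (the trace uses the full spdk/build/bin path)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &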
00:17:08.108 [2024-10-01 17:17:06.594020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.441 [2024-10-01 17:17:06.668289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.441 [2024-10-01 17:17:06.707528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.441 [2024-10-01 17:17:06.707576] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.441 [2024-10-01 17:17:06.707584] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.441 [2024-10-01 17:17:06.707591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.441 [2024-10-01 17:17:06.707596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.441 [2024-10-01 17:17:06.707740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.441 [2024-10-01 17:17:06.707867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.441 [2024-10-01 17:17:06.708035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.441 [2024-10-01 17:17:06.708056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:09.044 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:09.304 "nvmf_tgt_1" 00:17:09.304 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:09.304 "nvmf_tgt_2" 00:17:09.304 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
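multitarget.sh drives everything through test/nvmf/target/multitarget_rpc.py and counts targets with jq between steps: one default target at the start, three after the two creates, and back to one after the deletes. A minimal sketch of that create-and-check pattern as it appears in the trace (the rpc_py path is shortened here, and -s 32 appears to cap the new target's subsystem count):

    rpc_py=./test/nvmf/target/multitarget_rpc.py   # shortened path, illustrative
    # Baseline: only the default target exists
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ] || exit 1
    # Create two named targets; -s 32 matches the flag used in the trace
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    # Three targets should now be reported
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ] || exit 1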
00:17:09.304 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:09.564 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:09.564 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:09.564 true 00:17:09.564 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:09.564 true 00:17:09.564 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:09.564 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.823 rmmod nvme_tcp 00:17:09.823 rmmod nvme_fabrics 00:17:09.823 rmmod nvme_keyring 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2965423 ']' 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2965423 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2965423 ']' 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2965423 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2965423 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.823 17:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2965423' 00:17:09.823 killing process with pid 2965423 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2965423 00:17:09.823 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2965423 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.084 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.993 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.993 00:17:11.993 real 0m11.044s 00:17:11.993 user 0m9.528s 00:17:11.993 sys 0m5.726s 00:17:11.993 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.993 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:11.993 ************************************ 00:17:11.993 END TEST nvmf_multitarget 00:17:11.993 ************************************ 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.254 ************************************ 00:17:12.254 START TEST nvmf_rpc 00:17:12.254 ************************************ 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:12.254 * Looking for test storage... 
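Just before nvmf_rpc starts above, the multitarget teardown's iptr step removes only the firewall rules the test added, by round-tripping the ruleset through iptables-save and filtering on the SPDK_NVMF comment. A short sketch of that pattern (the netns removal line is an assumption about what _remove_spdk_ns does; the rest mirrors the trace):

    # Drop every rule tagged SPDK_NVMF, leave the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumption: _remove_spdk_ns deletes the private namespace created during setup
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1    # clear the leftover initiator address, as the trace does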
00:17:12.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:12.254 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.516 --rc genhtml_branch_coverage=1 00:17:12.516 --rc genhtml_function_coverage=1 00:17:12.516 --rc genhtml_legend=1 00:17:12.516 --rc geninfo_all_blocks=1 00:17:12.516 --rc geninfo_unexecuted_blocks=1 00:17:12.516 00:17:12.516 ' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.516 --rc genhtml_branch_coverage=1 00:17:12.516 --rc genhtml_function_coverage=1 00:17:12.516 --rc genhtml_legend=1 00:17:12.516 --rc geninfo_all_blocks=1 00:17:12.516 --rc geninfo_unexecuted_blocks=1 00:17:12.516 00:17:12.516 ' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.516 --rc genhtml_branch_coverage=1 00:17:12.516 --rc genhtml_function_coverage=1 00:17:12.516 --rc genhtml_legend=1 00:17:12.516 --rc geninfo_all_blocks=1 00:17:12.516 --rc geninfo_unexecuted_blocks=1 00:17:12.516 00:17:12.516 ' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.516 --rc genhtml_branch_coverage=1 00:17:12.516 --rc genhtml_function_coverage=1 00:17:12.516 --rc genhtml_legend=1 00:17:12.516 --rc geninfo_all_blocks=1 00:17:12.516 --rc geninfo_unexecuted_blocks=1 00:17:12.516 00:17:12.516 ' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
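The scripts/common.sh walk above (lt 1.15 2 via cmp_versions) is the harness checking whether the installed lcov is older than 2 so it can pick the matching coverage flags. A simplified comparison in the same spirit, hedged as an illustration rather than the real cmp_versions implementation:

    # lt A B: succeed when version A sorts before version B (e.g. lt 1.15 2)
    lt() {
        local IFS=.- i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2: use the legacy --rc flags"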
00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.516 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:12.517 17:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.517 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:20.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:20.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:20.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:20.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:20.665 17:17:17 
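In outline, the trace above matches the node's NICs against a whitelist of Intel E810/X722 and Mellanox vendor:device IDs, keeps the E810 entries (0x8086:0x159b here), and then resolves each matching PCI address to the kernel net device the ice driver created for it via sysfs. A condensed sketch of that discovery step, not the actual gather_supported_nvmf_pci_devs code, with the device ID taken from the log:

    # Find Intel E810 ports (vendor 8086, device 159b, as reported above) and the
    # net device name behind each one.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue
            printf 'Found %s -> %s\n' "$pci" "${dev##*/}"
        done
    done

On this node the two matching ports are 0000:4b:00.0 and 0000:4b:00.1, exposed as cvl_0_0 and cvl_0_1.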
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:20.665 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:20.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:17:20.666 00:17:20.666 --- 10.0.0.2 ping statistics --- 00:17:20.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.666 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:17:20.666 00:17:20.666 --- 10.0.0.1 ping statistics --- 00:17:20.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.666 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2970065 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2970065 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2970065 ']' 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.666 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 [2024-10-01 17:17:18.220224] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
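nvmf_tcp_init above builds a two-namespace topology on a single host: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in the firewall, both directions are ping-verified, and only then is nvmf_tgt launched inside the namespace. A condensed sketch of the same sequence (interface names, addresses, and nvmf_tgt flags are copied from the log; treat it as an outline, not the exact common.sh code):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
    # Start the target inside the namespace: -i shm id, -e tracepoint mask, -m core mask
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The test then waits for the target to listen on /var/tmp/spdk.sock before issuing any RPCs.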
00:17:20.666 [2024-10-01 17:17:18.220291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.666 [2024-10-01 17:17:18.295688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.666 [2024-10-01 17:17:18.335349] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.666 [2024-10-01 17:17:18.335398] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.666 [2024-10-01 17:17:18.335406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.666 [2024-10-01 17:17:18.335413] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.666 [2024-10-01 17:17:18.335420] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.666 [2024-10-01 17:17:18.335574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.666 [2024-10-01 17:17:18.335710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.666 [2024-10-01 17:17:18.335868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.666 [2024-10-01 17:17:18.335869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:20.666 "tick_rate": 2400000000, 00:17:20.666 "poll_groups": [ 00:17:20.666 { 00:17:20.666 "name": "nvmf_tgt_poll_group_000", 00:17:20.666 "admin_qpairs": 0, 00:17:20.666 "io_qpairs": 0, 00:17:20.666 "current_admin_qpairs": 0, 00:17:20.666 "current_io_qpairs": 0, 00:17:20.666 "pending_bdev_io": 0, 00:17:20.666 "completed_nvme_io": 0, 00:17:20.666 "transports": [] 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "nvmf_tgt_poll_group_001", 00:17:20.666 "admin_qpairs": 0, 00:17:20.666 "io_qpairs": 0, 00:17:20.666 "current_admin_qpairs": 0, 00:17:20.666 "current_io_qpairs": 0, 00:17:20.666 "pending_bdev_io": 0, 00:17:20.666 "completed_nvme_io": 0, 00:17:20.666 "transports": [] 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "nvmf_tgt_poll_group_002", 00:17:20.666 "admin_qpairs": 0, 00:17:20.666 "io_qpairs": 0, 00:17:20.666 
"current_admin_qpairs": 0, 00:17:20.666 "current_io_qpairs": 0, 00:17:20.666 "pending_bdev_io": 0, 00:17:20.666 "completed_nvme_io": 0, 00:17:20.666 "transports": [] 00:17:20.666 }, 00:17:20.666 { 00:17:20.666 "name": "nvmf_tgt_poll_group_003", 00:17:20.666 "admin_qpairs": 0, 00:17:20.666 "io_qpairs": 0, 00:17:20.666 "current_admin_qpairs": 0, 00:17:20.666 "current_io_qpairs": 0, 00:17:20.666 "pending_bdev_io": 0, 00:17:20.666 "completed_nvme_io": 0, 00:17:20.666 "transports": [] 00:17:20.666 } 00:17:20.666 ] 00:17:20.666 }' 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.666 [2024-10-01 17:17:19.189192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.666 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:20.667 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.667 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:20.929 "tick_rate": 2400000000, 00:17:20.929 "poll_groups": [ 00:17:20.929 { 00:17:20.929 "name": "nvmf_tgt_poll_group_000", 00:17:20.929 "admin_qpairs": 0, 00:17:20.929 "io_qpairs": 0, 00:17:20.929 "current_admin_qpairs": 0, 00:17:20.929 "current_io_qpairs": 0, 00:17:20.929 "pending_bdev_io": 0, 00:17:20.929 "completed_nvme_io": 0, 00:17:20.929 "transports": [ 00:17:20.929 { 00:17:20.929 "trtype": "TCP" 00:17:20.929 } 00:17:20.929 ] 00:17:20.929 }, 00:17:20.929 { 00:17:20.929 "name": "nvmf_tgt_poll_group_001", 00:17:20.929 "admin_qpairs": 0, 00:17:20.929 "io_qpairs": 0, 00:17:20.929 "current_admin_qpairs": 0, 00:17:20.929 "current_io_qpairs": 0, 00:17:20.929 "pending_bdev_io": 0, 00:17:20.929 "completed_nvme_io": 0, 00:17:20.929 "transports": [ 00:17:20.929 { 00:17:20.929 "trtype": "TCP" 00:17:20.929 } 00:17:20.929 ] 00:17:20.929 }, 00:17:20.929 { 00:17:20.929 "name": "nvmf_tgt_poll_group_002", 00:17:20.929 "admin_qpairs": 0, 00:17:20.929 "io_qpairs": 0, 00:17:20.929 "current_admin_qpairs": 0, 00:17:20.929 "current_io_qpairs": 0, 00:17:20.929 "pending_bdev_io": 0, 00:17:20.929 "completed_nvme_io": 0, 00:17:20.929 "transports": [ 00:17:20.929 { 00:17:20.929 "trtype": "TCP" 
00:17:20.929 } 00:17:20.929 ] 00:17:20.929 }, 00:17:20.929 { 00:17:20.929 "name": "nvmf_tgt_poll_group_003", 00:17:20.929 "admin_qpairs": 0, 00:17:20.929 "io_qpairs": 0, 00:17:20.929 "current_admin_qpairs": 0, 00:17:20.929 "current_io_qpairs": 0, 00:17:20.929 "pending_bdev_io": 0, 00:17:20.929 "completed_nvme_io": 0, 00:17:20.929 "transports": [ 00:17:20.929 { 00:17:20.929 "trtype": "TCP" 00:17:20.929 } 00:17:20.929 ] 00:17:20.929 } 00:17:20.929 ] 00:17:20.929 }' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 Malloc1 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
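Two things are being checked in the stats dumps above. First, nvmf_get_stats reports one poll group per reactor core (the -m 0xF mask gives four) with an empty transports list until `nvmf_create_transport -t tcp -o -u 8192` is issued, after which every group carries a TCP entry. Second, jcount and jsum are thin jq wrappers over that JSON: one counts the values a filter yields, the other sums them. A sketch of both, assuming SPDK's scripts/rpc.py talking to the target's default /var/tmp/spdk.sock and with the transport flags copied verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags exactly as used by the test
    stats=$(./scripts/rpc.py nvmf_get_stats)

    jcount() { jq "$1" <<< "$stats" | wc -l; }                        # how many values match
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # numeric sum of the matches

    (( $(jcount '.poll_groups[].name') == 4 ))       # one poll group per core in -m 0xF
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))    # no queue pairs before any host connects
    jq '.poll_groups[0].transports' <<< "$stats"     # now lists a "TCP" entry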
common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 [2024-10-01 17:17:19.381030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:20.929 [2024-10-01 17:17:19.417773] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:20.929 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:20.929 could not add new controller: failed to write to nvme-fabrics device 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:20.929 17:17:19 
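The failed connect above is deliberate: the test wraps it in the NOT helper so that the expected rejection counts as a pass (the trace shows the helper validating the executable and tracking the exit status in `es`). A rough re-statement of the core idiom only, not the real autotest_common.sh implementation:

    not() { ! "$@"; }    # succeed only if the wrapped command fails
    not nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be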
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.929 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.837 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.837 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:22.837 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.837 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:22.837 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.747 [2024-10-01 17:17:23.194068] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:24.747 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:24.747 could not add new controller: failed to write to nvme-fabrics device 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:24.747 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.748 
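What the last few steps exercise is the subsystem's host ACL: with allow_any_host disabled and the initiator's host NQN absent from the allow list, the target rejects the fabrics connect ("does not allow host", surfaced by the kernel as an I/O error on /dev/nvme-fabrics); adding the host NQN makes the same connect succeed, removing it blocks it again, and re-enabling allow_any_host opens the subsystem to everyone. A condensed sketch of that provisioning and ACL round trip, with NQNs, addresses, and RPC names copied from the log and scripts/rpc.py assumed to be pointed at the running target:

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                 # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"           # enforce the host list

    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" \
        || echo "rejected: host not on the subsystem's allow list"

    ./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"         # whitelist this host
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme disconnect -n "$SUBNQN"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"           # or open it to everyone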
17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.748 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.658 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.658 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.658 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.658 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:26.658 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.567 
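At target/rpc.sh@81 the test has entered its main loop: five times in a row it creates the subsystem, exposes it on 10.0.0.2:4420, attaches Malloc1 as namespace 5, connects with nvme-cli, waits for the block device to appear by serial number, then tears everything back down. The remainder of the trace is those five iterations; a condensed sketch of one pass follows (names, NQN, and addresses as in the log; the wait is re-stated here with a small lsblk loop rather than quoted from the waitforserial helper):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"          # open to any host NQN
        nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
        until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
        nvme disconnect -n "$SUBNQN"
        ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
        ./scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
    done

Malloc1 is created once, before the loop, so each iteration only re-registers it as a namespace.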
17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.567 [2024-10-01 17:17:26.924155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.567 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.967 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.967 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:29.967 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.967 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:29.967 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.512 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.513 [2024-10-01 17:17:30.650485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.513 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:33.899 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:33.899 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:33.899 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.899 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:33.899 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.813 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.074 [2024-10-01 17:17:34.373638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.074 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.460 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.460 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.460 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.460 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:37.460 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:40.000 
17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:40.000 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 [2024-10-01 17:17:38.145344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.000 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.413 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:41.413 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:41.413 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.413 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:41.413 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:43.323 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 [2024-10-01 17:17:41.917346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.583 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.492 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.492 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:45.492 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.492 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:45.492 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:47.403 
17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-10-01 17:17:45.692366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-10-01 17:17:45.760508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 
17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-10-01 17:17:45.828710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-10-01 17:17:45.900958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.403 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 [2024-10-01 17:17:45.969180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.664 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:47.664 "tick_rate": 2400000000, 00:17:47.664 "poll_groups": [ 00:17:47.664 { 00:17:47.664 "name": "nvmf_tgt_poll_group_000", 00:17:47.664 "admin_qpairs": 0, 00:17:47.664 "io_qpairs": 224, 00:17:47.664 "current_admin_qpairs": 0, 00:17:47.664 "current_io_qpairs": 0, 00:17:47.664 "pending_bdev_io": 0, 00:17:47.664 "completed_nvme_io": 274, 00:17:47.664 "transports": [ 00:17:47.664 { 00:17:47.664 "trtype": "TCP" 00:17:47.664 } 00:17:47.664 ] 00:17:47.664 }, 00:17:47.664 { 00:17:47.664 "name": "nvmf_tgt_poll_group_001", 00:17:47.664 "admin_qpairs": 1, 00:17:47.664 "io_qpairs": 223, 00:17:47.664 "current_admin_qpairs": 0, 00:17:47.664 "current_io_qpairs": 0, 00:17:47.664 "pending_bdev_io": 0, 00:17:47.664 "completed_nvme_io": 522, 00:17:47.664 "transports": [ 00:17:47.664 { 00:17:47.664 "trtype": "TCP" 00:17:47.664 } 00:17:47.664 ] 00:17:47.664 }, 00:17:47.664 { 00:17:47.664 "name": "nvmf_tgt_poll_group_002", 00:17:47.664 "admin_qpairs": 6, 00:17:47.664 "io_qpairs": 218, 00:17:47.664 "current_admin_qpairs": 0, 00:17:47.664 "current_io_qpairs": 0, 00:17:47.664 "pending_bdev_io": 0, 00:17:47.664 "completed_nvme_io": 219, 00:17:47.664 "transports": [ 00:17:47.664 { 00:17:47.664 "trtype": "TCP" 00:17:47.664 } 00:17:47.664 ] 00:17:47.664 }, 00:17:47.664 { 00:17:47.664 "name": "nvmf_tgt_poll_group_003", 00:17:47.664 "admin_qpairs": 0, 00:17:47.664 "io_qpairs": 224, 00:17:47.664 "current_admin_qpairs": 0, 00:17:47.664 "current_io_qpairs": 0, 00:17:47.664 "pending_bdev_io": 0, 00:17:47.664 "completed_nvme_io": 224, 00:17:47.664 "transports": [ 00:17:47.664 { 00:17:47.665 "trtype": "TCP" 00:17:47.665 } 00:17:47.665 ] 00:17:47.665 } 00:17:47.665 ] 00:17:47.665 }' 00:17:47.665 17:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.665 rmmod nvme_tcp 00:17:47.665 rmmod nvme_fabrics 00:17:47.665 rmmod nvme_keyring 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2970065 ']' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2970065 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2970065 ']' 00:17:47.665 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2970065 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2970065 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2970065' 00:17:47.926 killing process with pid 2970065 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2970065 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2970065 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.926 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:50.549 00:17:50.549 real 0m37.888s 00:17:50.549 user 1m54.187s 00:17:50.549 sys 0m7.675s 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.549 ************************************ 00:17:50.549 END TEST nvmf_rpc 00:17:50.549 ************************************ 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.549 ************************************ 00:17:50.549 START TEST nvmf_invalid 00:17:50.549 ************************************ 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:50.549 * Looking for test storage... 
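The qpair check traced just before the teardown sums fields out of nvmf_get_stats with a small jq/awk pipeline (the jsum helper). A stand-alone equivalent, assuming rpc.py and jq are on PATH and the target is still running:

  jsum() {
      # Sum one numeric field across all poll groups reported by nvmf_get_stats.
      local filter=$1
      ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s += $1} END {print s}'
  }

  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')
  (( admin_qpairs > 0 && io_qpairs > 0 )) || echo 'no qpairs were exercised' >&2
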
00:17:50.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.549 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.549 --rc genhtml_branch_coverage=1 00:17:50.549 --rc genhtml_function_coverage=1 00:17:50.549 --rc genhtml_legend=1 00:17:50.550 --rc geninfo_all_blocks=1 00:17:50.550 --rc geninfo_unexecuted_blocks=1 00:17:50.550 00:17:50.550 ' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:50.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.550 --rc genhtml_branch_coverage=1 00:17:50.550 --rc genhtml_function_coverage=1 00:17:50.550 --rc genhtml_legend=1 00:17:50.550 --rc geninfo_all_blocks=1 00:17:50.550 --rc geninfo_unexecuted_blocks=1 00:17:50.550 00:17:50.550 ' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:50.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.550 --rc genhtml_branch_coverage=1 00:17:50.550 --rc genhtml_function_coverage=1 00:17:50.550 --rc genhtml_legend=1 00:17:50.550 --rc geninfo_all_blocks=1 00:17:50.550 --rc geninfo_unexecuted_blocks=1 00:17:50.550 00:17:50.550 ' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:50.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.550 --rc genhtml_branch_coverage=1 00:17:50.550 --rc genhtml_function_coverage=1 00:17:50.550 --rc genhtml_legend=1 00:17:50.550 --rc geninfo_all_blocks=1 00:17:50.550 --rc geninfo_unexecuted_blocks=1 00:17:50.550 00:17:50.550 ' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:50.550 17:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:50.550 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:58.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:58.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:58.692 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:58.693 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:58.693 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:58.693 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:58.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:17:58.693 00:17:58.693 --- 10.0.0.2 ping statistics --- 00:17:58.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.693 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:17:58.693 00:17:58.693 --- 10.0.0.1 ping statistics --- 00:17:58.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.693 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2979654 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2979654 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2979654 ']' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 [2024-10-01 17:17:56.153196] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
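[Editor's note] The nvmf/common.sh trace above (nvmf_tcp_init) builds the TCP test topology for this run: one port of the e810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator side, TCP port 4420 is opened in iptables, and reachability is verified with ping before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence follows; the interface names and 10.0.0.x addresses are the values from this particular run and are assumptions for any other host.

    # Condensed sketch of the nvmf_tcp_init steps traced above (run as root).
    # cvl_0_0 / cvl_0_1 and 10.0.0.1 / 10.0.0.2 are taken from this run's log.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # accept NVMe/TCP traffic (port 4420) arriving on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> initiator
    # the target itself is then started inside the namespace, as in the trace above:
    #   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF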
00:17:58.693 [2024-10-01 17:17:56.153248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.693 [2024-10-01 17:17:56.222118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.693 [2024-10-01 17:17:56.253823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.693 [2024-10-01 17:17:56.253863] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.693 [2024-10-01 17:17:56.253872] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.693 [2024-10-01 17:17:56.253879] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.693 [2024-10-01 17:17:56.253885] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.693 [2024-10-01 17:17:56.254065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.693 [2024-10-01 17:17:56.254183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.693 [2024-10-01 17:17:56.254341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.693 [2024-10-01 17:17:56.254342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16364 00:17:58.693 [2024-10-01 17:17:56.555188] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:58.693 { 00:17:58.693 "nqn": "nqn.2016-06.io.spdk:cnode16364", 00:17:58.693 "tgt_name": "foobar", 00:17:58.693 "method": "nvmf_create_subsystem", 00:17:58.693 "req_id": 1 00:17:58.693 } 00:17:58.693 Got JSON-RPC error response 00:17:58.693 response: 00:17:58.693 { 00:17:58.693 "code": -32603, 00:17:58.693 "message": "Unable to find target foobar" 00:17:58.693 }' 00:17:58.693 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:58.693 { 00:17:58.693 "nqn": "nqn.2016-06.io.spdk:cnode16364", 00:17:58.693 "tgt_name": "foobar", 00:17:58.693 "method": "nvmf_create_subsystem", 00:17:58.693 "req_id": 1 00:17:58.693 } 00:17:58.693 Got JSON-RPC error response 00:17:58.694 
response: 00:17:58.694 { 00:17:58.694 "code": -32603, 00:17:58.694 "message": "Unable to find target foobar" 00:17:58.694 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode746 00:17:58.694 [2024-10-01 17:17:56.747829] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode746: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:58.694 { 00:17:58.694 "nqn": "nqn.2016-06.io.spdk:cnode746", 00:17:58.694 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:58.694 "method": "nvmf_create_subsystem", 00:17:58.694 "req_id": 1 00:17:58.694 } 00:17:58.694 Got JSON-RPC error response 00:17:58.694 response: 00:17:58.694 { 00:17:58.694 "code": -32602, 00:17:58.694 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:58.694 }' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:58.694 { 00:17:58.694 "nqn": "nqn.2016-06.io.spdk:cnode746", 00:17:58.694 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:58.694 "method": "nvmf_create_subsystem", 00:17:58.694 "req_id": 1 00:17:58.694 } 00:17:58.694 Got JSON-RPC error response 00:17:58.694 response: 00:17:58.694 { 00:17:58.694 "code": -32602, 00:17:58.694 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:58.694 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11674 00:17:58.694 [2024-10-01 17:17:56.940444] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11674: invalid model number 'SPDK_Controller' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:58.694 { 00:17:58.694 "nqn": "nqn.2016-06.io.spdk:cnode11674", 00:17:58.694 "model_number": "SPDK_Controller\u001f", 00:17:58.694 "method": "nvmf_create_subsystem", 00:17:58.694 "req_id": 1 00:17:58.694 } 00:17:58.694 Got JSON-RPC error response 00:17:58.694 response: 00:17:58.694 { 00:17:58.694 "code": -32602, 00:17:58.694 "message": "Invalid MN SPDK_Controller\u001f" 00:17:58.694 }' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:58.694 { 00:17:58.694 "nqn": "nqn.2016-06.io.spdk:cnode11674", 00:17:58.694 "model_number": "SPDK_Controller\u001f", 00:17:58.694 "method": "nvmf_create_subsystem", 00:17:58.694 "req_id": 1 00:17:58.694 } 00:17:58.694 Got JSON-RPC error response 00:17:58.694 response: 00:17:58.694 { 00:17:58.694 "code": -32602, 00:17:58.694 "message": "Invalid MN SPDK_Controller\u001f" 00:17:58.694 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:58.694 17:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:58.694 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 
00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:58.694 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 
00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a.$V,Q4ZQk:jUHYBXV[q,' 00:17:58.695 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'a.$V,Q4ZQk:jUHYBXV[q,' nqn.2016-06.io.spdk:cnode7103 00:17:58.958 [2024-10-01 17:17:57.277615] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7103: invalid serial number 'a.$V,Q4ZQk:jUHYBXV[q,' 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:58.958 { 00:17:58.958 "nqn": "nqn.2016-06.io.spdk:cnode7103", 00:17:58.958 "serial_number": "a.$V,Q4ZQk:jUHYBXV[q,", 00:17:58.958 "method": "nvmf_create_subsystem", 00:17:58.958 "req_id": 1 00:17:58.958 } 00:17:58.958 Got JSON-RPC error response 00:17:58.958 response: 00:17:58.958 { 00:17:58.958 "code": -32602, 00:17:58.958 "message": "Invalid SN a.$V,Q4ZQk:jUHYBXV[q," 00:17:58.958 }' 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:58.958 { 00:17:58.958 "nqn": "nqn.2016-06.io.spdk:cnode7103", 00:17:58.958 "serial_number": "a.$V,Q4ZQk:jUHYBXV[q,", 00:17:58.958 "method": "nvmf_create_subsystem", 00:17:58.958 "req_id": 1 00:17:58.958 } 00:17:58.958 Got JSON-RPC error response 00:17:58.958 response: 00:17:58.958 { 00:17:58.958 "code": -32602, 00:17:58.958 "message": "Invalid SN a.$V,Q4ZQk:jUHYBXV[q," 00:17:58.958 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:58.958 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 
00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x5a' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:58.959 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 50 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:59.221 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ea;b`4SqJ=~*!h/="[yZuLc|5>2D`NNbW*,tx( }:' 00:17:59.222 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ea;b`4SqJ=~*!h/="[yZuLc|5>2D`NNbW*,tx( }:' nqn.2016-06.io.spdk:cnode4688 00:17:59.482 [2024-10-01 17:17:57.787277] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4688: invalid model number 'ea;b`4SqJ=~*!h/="[yZuLc|5>2D`NNbW*,tx( }:' 00:17:59.482 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:59.482 { 00:17:59.482 "nqn": "nqn.2016-06.io.spdk:cnode4688", 00:17:59.482 "model_number": "ea;b`4SqJ=~*!h/=\"[yZuLc|5>2D`NNbW*,tx( }:", 00:17:59.482 "method": "nvmf_create_subsystem", 00:17:59.482 "req_id": 1 00:17:59.482 } 00:17:59.482 Got JSON-RPC error response 00:17:59.482 response: 00:17:59.483 { 00:17:59.483 "code": -32602, 00:17:59.483 "message": "Invalid MN ea;b`4SqJ=~*!h/=\"[yZuLc|5>2D`NNbW*,tx( }:" 00:17:59.483 }' 00:17:59.483 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:59.483 { 00:17:59.483 "nqn": "nqn.2016-06.io.spdk:cnode4688", 00:17:59.483 "model_number": "ea;b`4SqJ=~*!h/=\"[yZuLc|5>2D`NNbW*,tx( }:", 00:17:59.483 "method": "nvmf_create_subsystem", 00:17:59.483 "req_id": 1 00:17:59.483 } 00:17:59.483 Got JSON-RPC error response 00:17:59.483 response: 00:17:59.483 { 00:17:59.483 "code": -32602, 00:17:59.483 "message": "Invalid MN ea;b`4SqJ=~*!h/=\"[yZuLc|5>2D`NNbW*,tx( }:" 00:17:59.483 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:59.483 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:59.483 [2024-10-01 17:17:57.971955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.483 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:59.743 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:59.743 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:59.743 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:59.743 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:59.743 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:00.004 [2024-10-01 17:17:58.357206] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:00.004 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:00.004 { 00:18:00.004 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:00.004 "listen_address": { 00:18:00.004 "trtype": "tcp", 00:18:00.004 "traddr": "", 00:18:00.004 "trsvcid": "4421" 00:18:00.004 }, 00:18:00.004 "method": "nvmf_subsystem_remove_listener", 00:18:00.004 "req_id": 1 00:18:00.004 } 00:18:00.004 Got JSON-RPC error response 00:18:00.004 response: 00:18:00.004 { 00:18:00.004 "code": -32602, 00:18:00.004 "message": "Invalid parameters" 00:18:00.004 }' 00:18:00.004 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:00.004 { 00:18:00.004 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:00.004 "listen_address": { 00:18:00.004 "trtype": "tcp", 00:18:00.004 "traddr": "", 00:18:00.004 "trsvcid": "4421" 00:18:00.004 }, 00:18:00.004 "method": "nvmf_subsystem_remove_listener", 00:18:00.004 "req_id": 1 00:18:00.004 } 00:18:00.004 Got JSON-RPC error response 00:18:00.004 response: 00:18:00.004 { 00:18:00.004 "code": -32602, 00:18:00.004 "message": "Invalid parameters" 00:18:00.004 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:00.004 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23494 -i 0 00:18:00.004 [2024-10-01 17:17:58.545772] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23494: invalid cntlid range [0-65519] 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:00.272 { 00:18:00.272 "nqn": "nqn.2016-06.io.spdk:cnode23494", 00:18:00.272 "min_cntlid": 0, 00:18:00.272 "method": "nvmf_create_subsystem", 00:18:00.272 "req_id": 1 00:18:00.272 } 00:18:00.272 Got JSON-RPC error response 00:18:00.272 response: 00:18:00.272 { 00:18:00.272 "code": -32602, 00:18:00.272 "message": "Invalid cntlid range [0-65519]" 00:18:00.272 }' 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:00.272 { 00:18:00.272 "nqn": "nqn.2016-06.io.spdk:cnode23494", 00:18:00.272 "min_cntlid": 0, 00:18:00.272 "method": "nvmf_create_subsystem", 00:18:00.272 "req_id": 1 00:18:00.272 } 00:18:00.272 Got JSON-RPC error response 00:18:00.272 response: 00:18:00.272 { 00:18:00.272 "code": -32602, 00:18:00.272 "message": "Invalid cntlid range [0-65519]" 00:18:00.272 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11488 -i 65520 00:18:00.272 [2024-10-01 17:17:58.734384] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11488: invalid cntlid range [65520-65519] 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:00.272 { 00:18:00.272 "nqn": "nqn.2016-06.io.spdk:cnode11488", 00:18:00.272 "min_cntlid": 65520, 00:18:00.272 "method": "nvmf_create_subsystem", 00:18:00.272 "req_id": 1 00:18:00.272 } 00:18:00.272 Got JSON-RPC error response 00:18:00.272 response: 00:18:00.272 { 00:18:00.272 "code": -32602, 00:18:00.272 
"message": "Invalid cntlid range [65520-65519]" 00:18:00.272 }' 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:00.272 { 00:18:00.272 "nqn": "nqn.2016-06.io.spdk:cnode11488", 00:18:00.272 "min_cntlid": 65520, 00:18:00.272 "method": "nvmf_create_subsystem", 00:18:00.272 "req_id": 1 00:18:00.272 } 00:18:00.272 Got JSON-RPC error response 00:18:00.272 response: 00:18:00.272 { 00:18:00.272 "code": -32602, 00:18:00.272 "message": "Invalid cntlid range [65520-65519]" 00:18:00.272 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.272 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16342 -I 0 00:18:00.539 [2024-10-01 17:17:58.914979] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16342: invalid cntlid range [1-0] 00:18:00.539 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:00.539 { 00:18:00.539 "nqn": "nqn.2016-06.io.spdk:cnode16342", 00:18:00.539 "max_cntlid": 0, 00:18:00.539 "method": "nvmf_create_subsystem", 00:18:00.539 "req_id": 1 00:18:00.539 } 00:18:00.539 Got JSON-RPC error response 00:18:00.539 response: 00:18:00.539 { 00:18:00.539 "code": -32602, 00:18:00.539 "message": "Invalid cntlid range [1-0]" 00:18:00.539 }' 00:18:00.539 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:00.539 { 00:18:00.539 "nqn": "nqn.2016-06.io.spdk:cnode16342", 00:18:00.539 "max_cntlid": 0, 00:18:00.539 "method": "nvmf_create_subsystem", 00:18:00.539 "req_id": 1 00:18:00.539 } 00:18:00.539 Got JSON-RPC error response 00:18:00.539 response: 00:18:00.539 { 00:18:00.539 "code": -32602, 00:18:00.539 "message": "Invalid cntlid range [1-0]" 00:18:00.539 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.539 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24061 -I 65520 00:18:00.800 [2024-10-01 17:17:59.103569] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24061: invalid cntlid range [1-65520] 00:18:00.800 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:00.800 { 00:18:00.800 "nqn": "nqn.2016-06.io.spdk:cnode24061", 00:18:00.800 "max_cntlid": 65520, 00:18:00.800 "method": "nvmf_create_subsystem", 00:18:00.800 "req_id": 1 00:18:00.800 } 00:18:00.800 Got JSON-RPC error response 00:18:00.800 response: 00:18:00.800 { 00:18:00.800 "code": -32602, 00:18:00.800 "message": "Invalid cntlid range [1-65520]" 00:18:00.800 }' 00:18:00.800 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:00.800 { 00:18:00.800 "nqn": "nqn.2016-06.io.spdk:cnode24061", 00:18:00.800 "max_cntlid": 65520, 00:18:00.800 "method": "nvmf_create_subsystem", 00:18:00.800 "req_id": 1 00:18:00.800 } 00:18:00.800 Got JSON-RPC error response 00:18:00.800 response: 00:18:00.800 { 00:18:00.800 "code": -32602, 00:18:00.800 "message": "Invalid cntlid range [1-65520]" 00:18:00.800 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.800 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12386 -i 6 -I 
5 00:18:00.800 [2024-10-01 17:17:59.284136] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12386: invalid cntlid range [6-5] 00:18:00.800 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:00.800 { 00:18:00.800 "nqn": "nqn.2016-06.io.spdk:cnode12386", 00:18:00.800 "min_cntlid": 6, 00:18:00.800 "max_cntlid": 5, 00:18:00.800 "method": "nvmf_create_subsystem", 00:18:00.800 "req_id": 1 00:18:00.800 } 00:18:00.800 Got JSON-RPC error response 00:18:00.800 response: 00:18:00.800 { 00:18:00.800 "code": -32602, 00:18:00.800 "message": "Invalid cntlid range [6-5]" 00:18:00.800 }' 00:18:00.801 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:00.801 { 00:18:00.801 "nqn": "nqn.2016-06.io.spdk:cnode12386", 00:18:00.801 "min_cntlid": 6, 00:18:00.801 "max_cntlid": 5, 00:18:00.801 "method": "nvmf_create_subsystem", 00:18:00.801 "req_id": 1 00:18:00.801 } 00:18:00.801 Got JSON-RPC error response 00:18:00.801 response: 00:18:00.801 { 00:18:00.801 "code": -32602, 00:18:00.801 "message": "Invalid cntlid range [6-5]" 00:18:00.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.801 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:01.128 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:01.128 { 00:18:01.128 "name": "foobar", 00:18:01.128 "method": "nvmf_delete_target", 00:18:01.128 "req_id": 1 00:18:01.128 } 00:18:01.128 Got JSON-RPC error response 00:18:01.128 response: 00:18:01.128 { 00:18:01.128 "code": -32602, 00:18:01.128 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:01.128 }' 00:18:01.128 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:01.128 { 00:18:01.128 "name": "foobar", 00:18:01.128 "method": "nvmf_delete_target", 00:18:01.128 "req_id": 1 00:18:01.128 } 00:18:01.128 Got JSON-RPC error response 00:18:01.128 response: 00:18:01.128 { 00:18:01.128 "code": -32602, 00:18:01.129 "message": "The specified target doesn't exist, cannot delete it." 
00:18:01.129 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.129 rmmod nvme_tcp 00:18:01.129 rmmod nvme_fabrics 00:18:01.129 rmmod nvme_keyring 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2979654 ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2979654 ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2979654' 00:18:01.129 killing process with pid 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2979654 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:01.129 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:18:01.391 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.391 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.391 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.391 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.391 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.304 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.304 00:18:03.304 real 0m13.170s 00:18:03.304 user 0m17.956s 00:18:03.304 sys 0m6.402s 00:18:03.304 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.304 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:03.304 ************************************ 00:18:03.305 END TEST nvmf_invalid 00:18:03.305 ************************************ 00:18:03.305 17:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:03.305 17:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.305 17:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.305 17:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.305 ************************************ 00:18:03.305 START TEST nvmf_connect_stress 00:18:03.305 ************************************ 00:18:03.305 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:03.566 * Looking for test storage... 
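In condensed form, the invalid-cntlid checks traced above all follow one negative-test pattern: call nvmf_create_subsystem with an out-of-range -i (min_cntlid) / -I (max_cntlid) pair, expect a -32602 JSON-RPC error, and match the "Invalid cntlid range" text. A minimal sketch of that pattern, assuming SPDK_DIR points at an SPDK checkout with a running target; the helper name check_invalid_cntlid is illustrative only and is not part of invalid.sh:

    # Hedged sketch of the negative checks above; check_invalid_cntlid is a made-up helper.
    check_invalid_cntlid() {
        local nqn=$1; shift
        local out
        # rpc.py prints the JSON-RPC error text for rejected requests; exit status is ignored here.
        out=$("$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem "$nqn" "$@" 2>&1 || true)
        [[ $out == *"Invalid cntlid range"* ]]
    }
    check_invalid_cntlid nqn.2016-06.io.spdk:cnode11488 -i 65520    # min above the allowed maximum (65519)
    check_invalid_cntlid nqn.2016-06.io.spdk:cnode16342 -I 0        # max below the allowed minimum (1)
    check_invalid_cntlid nqn.2016-06.io.spdk:cnode12386 -i 6 -I 5   # min greater than max

Each call mirrors one of the rpc.py invocations logged above; the real test additionally asserts the full request/response text via the glob comparisons at invalid.sh lines 76-84.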
00:18:03.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.566 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.566 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.566 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.566 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.566 --rc genhtml_branch_coverage=1 00:18:03.566 --rc genhtml_function_coverage=1 00:18:03.566 --rc genhtml_legend=1 00:18:03.566 --rc geninfo_all_blocks=1 00:18:03.567 --rc geninfo_unexecuted_blocks=1 00:18:03.567 00:18:03.567 ' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.567 --rc genhtml_branch_coverage=1 00:18:03.567 --rc genhtml_function_coverage=1 00:18:03.567 --rc genhtml_legend=1 00:18:03.567 --rc geninfo_all_blocks=1 00:18:03.567 --rc geninfo_unexecuted_blocks=1 00:18:03.567 00:18:03.567 ' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.567 --rc genhtml_branch_coverage=1 00:18:03.567 --rc genhtml_function_coverage=1 00:18:03.567 --rc genhtml_legend=1 00:18:03.567 --rc geninfo_all_blocks=1 00:18:03.567 --rc geninfo_unexecuted_blocks=1 00:18:03.567 00:18:03.567 ' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.567 --rc genhtml_branch_coverage=1 00:18:03.567 --rc genhtml_function_coverage=1 00:18:03.567 --rc genhtml_legend=1 00:18:03.567 --rc geninfo_all_blocks=1 00:18:03.567 --rc geninfo_unexecuted_blocks=1 00:18:03.567 00:18:03.567 ' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:03.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:03.567 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:03.568 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.568 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:11.714 17:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:11.714 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:11.715 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:11.715 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:11.715 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:11.715 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:11.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:18:11.715 00:18:11.715 --- 10.0.0.2 ping statistics --- 00:18:11.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.715 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:18:11.715 00:18:11.715 --- 10.0.0.1 ping statistics --- 00:18:11.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.715 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2985133 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2985133 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2985133 ']' 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:11.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.715 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.715 [2024-10-01 17:18:09.461104] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:11.715 [2024-10-01 17:18:09.461172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.715 [2024-10-01 17:18:09.550780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:11.716 [2024-10-01 17:18:09.599299] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.716 [2024-10-01 17:18:09.599359] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.716 [2024-10-01 17:18:09.599368] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.716 [2024-10-01 17:18:09.599375] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.716 [2024-10-01 17:18:09.599381] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.716 [2024-10-01 17:18:09.599516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.716 [2024-10-01 17:18:09.599681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.716 [2024-10-01 17:18:09.599682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 [2024-10-01 17:18:10.327529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
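Condensed, the target-side sequence that connect_stress.sh drives (via its rpc_cmd wrapper, which in this run talks to the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace) looks roughly like the following. The standalone rpc.py calls and the SPDK_DIR placeholder are readability stand-ins for the framework helpers and the jenkins workspace path used above; the flags themselves are copied from the trace:

    # Rough sketch of the RPC sequence recorded in this log; flags are taken verbatim from the trace.
    rpc="$SPDK_DIR/scripts/rpc.py"
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport with the test's -o/-u options
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDK00000000000001 -m 10                            # allow any host, fixed serial number
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420                                # listen on the namespaced target address
    "$rpc" bdev_null_create NULL1 1000 512                           # null bdev "NULL1": 1000 MB, 512-byte blocks
    "$SPDK_DIR/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                                      # the script records this PID as PERF_PID (2985364 above)

The repeated "kill -0 2985364" / rpc_cmd pairs that follow simply confirm the connect_stress process is still alive while the target keeps answering RPCs for the duration of the run.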
00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 [2024-10-01 17:18:10.361455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 NULL1 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2985364 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:11.977 17:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.977 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.548 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.548 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:12.548 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.548 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.548 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.808 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.808 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:12.808 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.808 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.808 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.068 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:13.068 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.068 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.068 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.328 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.328 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:13.328 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.328 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.328 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.588 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.588 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:13.588 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.588 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.588 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.159 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.159 17:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:14.159 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.159 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.159 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.419 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.419 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:14.419 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.419 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.419 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.679 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.679 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:14.679 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.679 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.679 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.939 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.939 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:14.939 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.939 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.939 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.305 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.305 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:15.305 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.305 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.305 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.613 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.613 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:15.613 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.613 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.613 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.873 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.873 17:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:15.873 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.873 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.873 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.444 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.444 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:16.444 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.444 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.444 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.704 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.704 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:16.704 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.704 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.704 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.964 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.964 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:16.964 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.964 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.964 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.225 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.225 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:17.225 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.225 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.225 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.485 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.485 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:17.485 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.485 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.485 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.055 17:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:18.055 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.055 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.055 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.316 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.316 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:18.316 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.316 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.316 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.576 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.576 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:18.576 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.576 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.576 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.836 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.836 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:18.836 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.836 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.836 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.096 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.096 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:19.096 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.096 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.096 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.667 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.667 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:19.667 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.667 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.667 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.928 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.928 17:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:19.928 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.928 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.928 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.188 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.188 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:20.188 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.188 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.188 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.449 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.449 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:20.449 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.449 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.449 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.020 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.020 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:21.020 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.020 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.020 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.280 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.280 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:21.280 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.280 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.280 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.541 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.541 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:21.541 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.541 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.541 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.802 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.802 17:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:21.802 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.802 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.802 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.063 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2985364 00:18:22.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2985364) - No such process 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2985364 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.063 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.063 rmmod nvme_tcp 00:18:22.063 rmmod nvme_fabrics 00:18:22.324 rmmod nvme_keyring 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2985133 ']' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2985133 ']' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
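The long run of repeated "kill -0 2985364" / "rpc_cmd" entries above is the connect_stress.sh wait loop: while the background stress process (pid 2985364) is still alive, the script keeps driving RPCs at the target, and the loop ends once kill -0 reports "No such process" and wait reaps the pid. A minimal sketch of that polling pattern in plain bash follows; the rpc.py path, the placeholder workload and the particular RPC issued are illustrative, since the script's rpc_cmd helper and its batched rpc.txt input are not visible in this trace.

# Sketch only: run a stress workload in the background, keep the target busy
# with RPCs while it is alive, then reap it once it exits.
rpc_py=/path/to/spdk/scripts/rpc.py      # illustrative path to SPDK's rpc.py
some_stress_workload &                   # placeholder for the real connect/disconnect stress tool
stress_pid=$!

while kill -0 "$stress_pid" 2>/dev/null; do
    # Any lightweight RPC keeps the target exercised during the stress run;
    # which RPC connect_stress.sh actually sends is not shown in this excerpt.
    "$rpc_py" nvmf_get_subsystems > /dev/null
done
wait "$stress_pid"                       # the failing kill -0 ("No such process") in the trace marks this point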
00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2985133' 00:18:22.324 killing process with pid 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2985133 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.324 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:24.870 00:18:24.870 real 0m21.076s 00:18:24.870 user 0m42.332s 00:18:24.870 sys 0m9.023s 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.870 ************************************ 00:18:24.870 END TEST nvmf_connect_stress 00:18:24.870 ************************************ 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.870 ************************************ 00:18:24.870 START TEST nvmf_fused_ordering 00:18:24.870 ************************************ 00:18:24.870 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:24.870 * Looking for test storage... 
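The nvmftestfini teardown traced above then unwinds everything the test set up: the nvme-tcp / nvme-fabrics kernel modules (plus nvme_keyring) are removed, the nvmf target started for this test (pid 2985133) is killed and reaped, the iptables rules tagged with an SPDK_NVMF comment are stripped out, and the target network namespace is torn down before run_test prints the END banner and timing summary. A rough sketch of that sequence with illustrative variable names; the real killprocess and _remove_spdk_ns helpers add checks and retries that are elided here.

# Sketch of the cleanup sequence seen in the trace; names are illustrative.
nvmfpid=2985133                                 # pid of the nvmf_tgt launched for this test
target_ns=cvl_0_0_ns_spdk                       # namespace holding the target-side port

modprobe -v -r nvme-tcp                         # rmmod of nvme_tcp, nvme_fabrics, nvme_keyring follows, as in the trace
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null  # stop the target and collect its exit status
# Drop only the firewall rules this test added; they carry an SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# How _remove_spdk_ns hands the port back is not visible here; deleting the
# namespace (which releases its interfaces) is one plausible equivalent.
ip netns delete "$target_ns" 2>/dev/null || true
ip -4 addr flush cvl_0_1                        # clear the initiator-side address, as in the trace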
00:18:24.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.870 --rc genhtml_branch_coverage=1 00:18:24.870 --rc genhtml_function_coverage=1 00:18:24.870 --rc genhtml_legend=1 00:18:24.870 --rc geninfo_all_blocks=1 00:18:24.870 --rc geninfo_unexecuted_blocks=1 00:18:24.870 00:18:24.870 ' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.870 --rc genhtml_branch_coverage=1 00:18:24.870 --rc genhtml_function_coverage=1 00:18:24.870 --rc genhtml_legend=1 00:18:24.870 --rc geninfo_all_blocks=1 00:18:24.870 --rc geninfo_unexecuted_blocks=1 00:18:24.870 00:18:24.870 ' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.870 --rc genhtml_branch_coverage=1 00:18:24.870 --rc genhtml_function_coverage=1 00:18:24.870 --rc genhtml_legend=1 00:18:24.870 --rc geninfo_all_blocks=1 00:18:24.870 --rc geninfo_unexecuted_blocks=1 00:18:24.870 00:18:24.870 ' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:24.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.870 --rc genhtml_branch_coverage=1 00:18:24.870 --rc genhtml_function_coverage=1 00:18:24.870 --rc genhtml_legend=1 00:18:24.870 --rc geninfo_all_blocks=1 00:18:24.870 --rc geninfo_unexecuted_blocks=1 00:18:24.870 00:18:24.870 ' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.870 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:24.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.871 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:33.020 17:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:33.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:33.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:33.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:33.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:33.020 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:33.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:18:33.021 00:18:33.021 --- 10.0.0.2 ping statistics --- 00:18:33.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.021 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:18:33.021 00:18:33.021 --- 10.0.0.1 ping statistics --- 00:18:33.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.021 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2991774 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2991774 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2991774 ']' 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:33.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.021 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 [2024-10-01 17:18:30.566230] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:33.021 [2024-10-01 17:18:30.566285] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.021 [2024-10-01 17:18:30.651664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.021 [2024-10-01 17:18:30.685267] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.021 [2024-10-01 17:18:30.685312] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.021 [2024-10-01 17:18:30.685320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.021 [2024-10-01 17:18:30.685327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.021 [2024-10-01 17:18:30.685333] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.021 [2024-10-01 17:18:30.685355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 [2024-10-01 17:18:31.424148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 [2024-10-01 17:18:31.440416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 NULL1 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.021 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.022 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:33.022 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.022 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.022 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.022 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:33.022 [2024-10-01 17:18:31.498294] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
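Before the fused_ordering counters below start, the trace shows the whole bring-up for this test: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) with 10.0.0.2, the initiator keeps 10.0.0.1 on cvl_0_1, nvmf_tgt is started inside that namespace on core mask 0x2, and the subsystem is configured over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as namespace 1. Finally the fused_ordering binary is pointed at that listener via a transport-ID string. A condensed sketch of the same steps follows; paths are illustrative and the waitforlisten helper is reduced to a comment, so this is the shape of nvmftestinit/nvmfappstart rather than a drop-in replacement.

# Condensed sketch of the setup traced above; not a substitute for the real helpers.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # reachability check, as in the trace

# Start the target in the namespace and configure it over RPC (default /var/tmp/spdk.sock).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
# (the harness's waitforlisten polls the RPC socket here before continuing)
rpc="$SPDK/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks (reported as ~1GB below)
"$rpc" bdev_wait_for_examine
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Run the fused-ordering test app against the new listener.
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'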
00:18:33.022 [2024-10-01 17:18:31.498349] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991884 ]
00:18:33.594 Attached to nqn.2016-06.io.spdk:cnode1
00:18:33.594 Namespace ID: 1 size: 1GB
00:18:33.594 fused_ordering(0) ... fused_ordering(1023): 1024 fused-ordering iterations completed between 00:18:33.594 and 00:18:35.259
00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:35.259 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:35.520 rmmod nvme_tcp 00:18:35.520 rmmod nvme_fabrics 00:18:35.520 rmmod nvme_keyring 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:35.520 17:18:33
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2991774 ']' 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2991774 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2991774 ']' 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2991774 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2991774 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2991774' 00:18:35.520 killing process with pid 2991774 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2991774 00:18:35.520 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2991774 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.781 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:37.695 00:18:37.695 real 0m13.173s 00:18:37.695 user 0m7.120s 00:18:37.695 sys 0m6.818s 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:37.695 ************************************ 00:18:37.695 END TEST nvmf_fused_ordering 00:18:37.695 
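For reference, the nvmftestfini teardown traced above reduces to a handful of shell steps. The sketch below is hand-condensed from this trace, not the test script itself; the PID (2991774), interface name (cvl_0_1), and namespace name are specific to this run, and the netns deletion step is an assumption about what the remove_spdk_ns helper does here.

```bash
#!/usr/bin/env bash
# Condensed sketch of the teardown performed by nvmftestfini in this run.
# Values below (PID, interface, netns name) are taken from this log and are run-specific.

NVMF_PID=2991774            # nvmf_tgt PID reported for this test
INITIATOR_IF=cvl_0_1        # initiator-side interface used in this run
TARGET_NS=cvl_0_0_ns_spdk   # target network namespace created during setup (assumed name)

sync
modprobe -v -r nvme-tcp      # also unloads nvme_fabrics / nvme_keyring, as the rmmod output above shows
modprobe -v -r nvme-fabrics

kill "$NVMF_PID" && wait "$NVMF_PID" 2>/dev/null   # stop the nvmf_tgt reactor

# Drop the SPDK_NVMF-tagged iptables rules added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the test network: flush the initiator address and (assumed) delete the target netns
ip -4 addr flush "$INITIATOR_IF"
ip netns delete "$TARGET_NS" 2>/dev/null || true
```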
************************************ 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.695 17:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.957 ************************************ 00:18:37.957 START TEST nvmf_ns_masking 00:18:37.957 ************************************ 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:37.957 * Looking for test storage... 00:18:37.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:37.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.957 --rc genhtml_branch_coverage=1 00:18:37.957 --rc genhtml_function_coverage=1 00:18:37.957 --rc genhtml_legend=1 00:18:37.957 --rc geninfo_all_blocks=1 00:18:37.957 --rc geninfo_unexecuted_blocks=1 00:18:37.957 00:18:37.957 ' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:37.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.957 --rc genhtml_branch_coverage=1 00:18:37.957 --rc genhtml_function_coverage=1 00:18:37.957 --rc genhtml_legend=1 00:18:37.957 --rc geninfo_all_blocks=1 00:18:37.957 --rc geninfo_unexecuted_blocks=1 00:18:37.957 00:18:37.957 ' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:37.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.957 --rc genhtml_branch_coverage=1 00:18:37.957 --rc genhtml_function_coverage=1 00:18:37.957 --rc genhtml_legend=1 00:18:37.957 --rc geninfo_all_blocks=1 00:18:37.957 --rc geninfo_unexecuted_blocks=1 00:18:37.957 00:18:37.957 ' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:37.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.957 --rc genhtml_branch_coverage=1 00:18:37.957 --rc genhtml_function_coverage=1 00:18:37.957 --rc genhtml_legend=1 00:18:37.957 --rc geninfo_all_blocks=1 00:18:37.957 --rc geninfo_unexecuted_blocks=1 00:18:37.957 00:18:37.957 ' 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.957 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0904e457-8239-411e-b1da-1e43f2aba1fb 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3174105c-5ffd-4b21-9e22-3a8059b214a3 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:37.958 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fe4e2428-8eac-4b68-908d-bd43490dd6ea 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:38.220 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.364 17:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:46.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:46.364 17:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:46.364 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:46.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:46.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.364 17:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:18:46.364 00:18:46.364 --- 10.0.0.2 ping statistics --- 00:18:46.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.364 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:18:46.364 00:18:46.364 --- 10.0.0.1 ping statistics --- 00:18:46.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.364 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2996599 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2996599 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:46.364 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2996599 ']' 00:18:46.365 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
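Before the target comes up, the script opens TCP port 4420 on the initiator-side interface and confirms that both addresses answer a ping across the two ports. The same checks in isolation, as they appear in the trace (the iptables comment tag is what the later cleanup greps for):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # default namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address
  modprobe nvme-tcp                                   # kernel initiator used later by `nvme connect`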
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.365 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.365 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.365 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.365 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.365 [2024-10-01 17:18:44.016705] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:46.365 [2024-10-01 17:18:44.016774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.365 [2024-10-01 17:18:44.090091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.365 [2024-10-01 17:18:44.128215] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.365 [2024-10-01 17:18:44.128263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.365 [2024-10-01 17:18:44.128272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.365 [2024-10-01 17:18:44.128279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.365 [2024-10-01 17:18:44.128285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
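nvmfappstart launches the target inside the namespace, so the listener created later binds 10.0.0.2, and then waits for the RPC socket before issuing any commands. A sketch of the same launch-and-wait step; the polling loop is a reconstruction of what waitforlisten does, not its exact code:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # Poll the default RPC socket until the app answers; rpc_get_methods is a cheap query.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.1
  done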
00:18:46.365 [2024-10-01 17:18:44.128306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.365 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:46.624 [2024-10-01 17:18:45.000316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.624 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:46.624 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:46.624 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:46.884 Malloc1 00:18:46.884 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:46.884 Malloc2 00:18:46.884 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:47.145 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:47.404 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.404 [2024-10-01 17:18:45.840655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.404 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:47.404 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe4e2428-8eac-4b68-908d-bd43490dd6ea -a 10.0.0.2 -s 4420 -i 4 00:18:47.665 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.665 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:47.665 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.665 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:47.665 
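With the target running, provisioning is pure JSON-RPC followed by a kernel-side connect. The calls from the trace, condensed (the full rpc.py path is shortened to rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1        # size 64, block size 512 (values from the trace)
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, with the host NQN and host identifier used in this run:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I fe4e2428-8eac-4b68-908d-bd43490dd6ea -a 10.0.0.2 -s 4420 -i 4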
17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:49.578 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:49.839 [ 0]:0x1 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e9dbec48aeb41cc8a40ded10f9b75f6 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e9dbec48aeb41cc8a40ded10f9b75f6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.839 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:50.100 [ 0]:0x1 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e9dbec48aeb41cc8a40ded10f9b75f6 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e9dbec48aeb41cc8a40ded10f9b75f6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.100 17:18:48 
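The repeated "[ 0]:0x1" / nguid lines come from the test's ns_is_visible helper: it asks the controller for its namespace list and the NGUID of the given NSID, and treats an all-zero NGUID as "masked". A reconstruction of that check (the script's exact wording of the helper may differ):

  ns_is_visible() {
      local ctrl=$1 nsid=$2
      nvme list-ns "/dev/$ctrl" | grep -q "$nsid" || return 1
      local nguid
      nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible nvme0 0x1    # true here: namespace 1 is attached and auto-visible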
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:50.100 [ 1]:0x2 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.100 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.361 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:50.624 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:50.624 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe4e2428-8eac-4b68-908d-bd43490dd6ea -a 10.0.0.2 -s 4420 -i 4 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:50.624 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.170 [ 0]:0x2 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.170 [ 0]:0x1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e9dbec48aeb41cc8a40ded10f9b75f6 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e9dbec48aeb41cc8a40ded10f9b75f6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.170 [ 1]:0x2 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.170 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.432 17:18:51 
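Taken together, the calls above are the core of the masking workflow: re-add the namespace with --no-auto-visible so no host sees it by default, then grant and revoke visibility per host NQN. Condensed from the trace:

  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # While hidden, `nvme id-ns` on this host reports an all-zero NGUID for NSID 1.
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again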
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.432 [ 0]:0x2 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:53.432 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.693 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.693 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:53.694 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe4e2428-8eac-4b68-908d-bd43490dd6ea -a 10.0.0.2 -s 4420 -i 4 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:53.954 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:55.870 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:56.131 [ 0]:0x1 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8e9dbec48aeb41cc8a40ded10f9b75f6 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8e9dbec48aeb41cc8a40ded10f9b75f6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:56.131 [ 1]:0x2 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != 
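waitforserial is the gate between `nvme connect` and the visibility checks: it polls lsblk until the expected number of block devices carrying the subsystem serial shows up (two here, since host1 was granted namespace 1 on top of the auto-visible namespace 2). A reconstruction of the loop; retry count and sleep are taken from the trace, other details may differ:

  waitforserial() {
      local serial=$1 expected=${2:-1} i=0 found
      while ((i++ <= 15)); do
          sleep 2
          found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          ((found == expected)) && return 0
      done
      return 1
  }

  waitforserial SPDKISFASTANDAWESOME 2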
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.131 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:56.392 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:56.393 [ 0]:0x2 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.393 17:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:56.393 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:56.654 [2024-10-01 17:18:55.047652] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:56.654 request: 00:18:56.654 { 00:18:56.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.654 "nsid": 2, 00:18:56.654 "host": "nqn.2016-06.io.spdk:host1", 00:18:56.654 "method": "nvmf_ns_remove_host", 00:18:56.654 "req_id": 1 00:18:56.654 } 00:18:56.655 Got JSON-RPC error response 00:18:56.655 response: 00:18:56.655 { 00:18:56.655 "code": -32602, 00:18:56.655 "message": "Invalid parameters" 00:18:56.655 } 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:56.655 17:18:55 
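Namespace 2 was added without --no-auto-visible, so its per-host visibility cannot be edited; the NOT wrapper asserts that the RPC fails, and the target answers with JSON-RPC error -32602 (Invalid parameters) as shown above. The same negative check stand-alone (rpc.py path shortened):

  if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
      echo "unexpected: masking RPCs should be rejected for auto-visible namespaces" >&2
      exit 1
  fi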
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:56.655 [ 0]:0x2 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f23d0169f3d4065825bd2f2341f41b1 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f23d0169f3d4065825bd2f2341f41b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:56.655 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:56.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2998980 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2998980 /var/tmp/host.sock 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2998980 ']' 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:56.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.916 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:56.916 [2024-10-01 17:18:55.317522] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:56.916 [2024-10-01 17:18:55.317581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998980 ] 00:18:56.916 [2024-10-01 17:18:55.397041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.916 [2024-10-01 17:18:55.427749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.860 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.860 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:57.860 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:57.860 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0904e457-8239-411e-b1da-1e43f2aba1fb 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0904E4578239411EB1DA1E43F2ABA1FB -i 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3174105c-5ffd-4b21-9e22-3a8059b214a3 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:58.121 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3174105C5FFD4B219E223A8059B214A3 -i 00:18:58.382 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
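For the second phase the namespaces are re-created with fixed NGUIDs derived from UUIDs. The trace only shows the `tr -d -` step of uuid2nguid, so the upper-casing in the sketch below is inferred from the resulting -g values and should be read as an assumption about the helper:

  uuid2nguid() {
      local u=${1//-/}     # strip dashes, as the traced `tr -d -` does
      echo "${u^^}"        # upper-case (assumption, inferred from the NGUIDs passed to -g)
  }

  uuid2nguid 0904e457-8239-411e-b1da-1e43f2aba1fb   # -> 0904E4578239411EB1DA1E43F2ABA1FB
  uuid2nguid 3174105c-5ffd-4b21-9e22-3a8059b214a3   # -> 3174105C5FFD4B219E223A8059B214A3
  # These values are what the trace passes to nvmf_subsystem_add_ns via -g.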
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:58.643 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:58.643 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:58.643 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:58.904 nvme0n1 00:18:58.904 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:58.904 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:59.165 nvme1n2 00:18:59.165 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:59.165 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:59.165 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:59.165 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:59.165 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0904e457-8239-411e-b1da-1e43f2aba1fb == \0\9\0\4\e\4\5\7\-\8\2\3\9\-\4\1\1\e\-\b\1\d\a\-\1\e\4\3\f\2\a\b\a\1\f\b ]] 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:59.426 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:59.427 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
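Verification then moves to a second SPDK application listening on /var/tmp/host.sock: each host NQN is attached as a bdev_nvme controller and should only see the namespace granted to it. The calls from the trace, with the rpc.py path shortened:

  HOSTRPC="rpc.py -s /var/tmp/host.sock"

  $HOSTRPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # yields nvme0n1
  $HOSTRPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # yields nvme1n2

  $HOSTRPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme1n2
  $HOSTRPC bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # expect: 0904e457-8239-411e-b1da-1e43f2aba1fb
  $HOSTRPC bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'       # expect: 3174105c-5ffd-4b21-9e22-3a8059b214a3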
3174105c-5ffd-4b21-9e22-3a8059b214a3 == \3\1\7\4\1\0\5\c\-\5\f\f\d\-\4\b\2\1\-\9\e\2\2\-\3\a\8\0\5\9\b\2\1\4\a\3 ]] 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2998980 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2998980 ']' 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2998980 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2998980 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2998980' 00:18:59.687 killing process with pid 2998980 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2998980 00:18:59.687 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2998980 00:18:59.948 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.209 rmmod nvme_tcp 00:19:00.209 rmmod nvme_fabrics 00:19:00.209 rmmod nvme_keyring 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2996599 ']' 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2996599 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2996599 ']' 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2996599 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
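Both SPDK apps are stopped through killprocess. Reconstructed from the trace, it sanity-checks the pid's command name before killing and reaping it (the real helper has a few more branches than this):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for the host-side app
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2> /dev/null || true             # reaping only works if the target is our child
  }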
common/autotest_common.sh@955 -- # uname 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2996599 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2996599' 00:19:00.209 killing process with pid 2996599 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2996599 00:19:00.209 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2996599 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.470 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:03.018 00:19:03.018 real 0m24.689s 00:19:03.018 user 0m24.634s 00:19:03.018 sys 0m7.835s 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:03.018 ************************************ 00:19:03.018 END TEST nvmf_ns_masking 00:19:03.018 ************************************ 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.018 17:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
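nvmftestfini then unwinds what the setup created: the kernel NVMe/TCP modules are removed (the rmmod lines above), the SPDK-tagged firewall rule is filtered back out, and the namespace and addresses are cleared. The network part of that cleanup, with the namespace deletion marked as an assumption since the trace only shows _remove_spdk_ns being invoked:

  modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics / nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # removes only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1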
00:19:03.018 ************************************ 00:19:03.018 START TEST nvmf_nvme_cli 00:19:03.018 ************************************ 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:03.018 * Looking for test storage... 00:19:03.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.018 --rc genhtml_branch_coverage=1 00:19:03.018 --rc genhtml_function_coverage=1 00:19:03.018 --rc genhtml_legend=1 00:19:03.018 --rc geninfo_all_blocks=1 00:19:03.018 --rc geninfo_unexecuted_blocks=1 00:19:03.018 00:19:03.018 ' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.018 --rc genhtml_branch_coverage=1 00:19:03.018 --rc genhtml_function_coverage=1 00:19:03.018 --rc genhtml_legend=1 00:19:03.018 --rc geninfo_all_blocks=1 00:19:03.018 --rc geninfo_unexecuted_blocks=1 00:19:03.018 00:19:03.018 ' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.018 --rc genhtml_branch_coverage=1 00:19:03.018 --rc genhtml_function_coverage=1 00:19:03.018 --rc genhtml_legend=1 00:19:03.018 --rc geninfo_all_blocks=1 00:19:03.018 --rc geninfo_unexecuted_blocks=1 00:19:03.018 00:19:03.018 ' 00:19:03.018 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:03.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.018 --rc genhtml_branch_coverage=1 00:19:03.018 --rc genhtml_function_coverage=1 00:19:03.018 --rc genhtml_legend=1 00:19:03.018 --rc geninfo_all_blocks=1 00:19:03.018 --rc geninfo_unexecuted_blocks=1 00:19:03.018 00:19:03.018 ' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
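The version check traced just above (lt 1.15 2 via cmp_versions) splits both version strings on dots, dashes, and colons and compares them component by component to decide which lcov coverage flags to export. A standalone sketch of that comparison, assuming purely numeric components; the helper name ver_lt is ours, the script's own function is cmp_versions:

#!/usr/bin/env bash
# Component-wise dotted-version comparison, as walked through in the trace above.
ver_lt() {                                  # ver_lt A B -> exit 0 when A < B
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # A is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # A is older
    done
    return 1                                # equal, so not less-than
}

# The test applies it to the installed lcov, exactly as the trace shows:
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi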
00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.019 17:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.019 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:11.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:11.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.163 
17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:11.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.163 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:11.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:19:11.164 00:19:11.164 --- 10.0.0.2 ping statistics --- 00:19:11.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.164 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:19:11.164 00:19:11.164 --- 10.0.0.1 ping statistics --- 00:19:11.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.164 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3003854 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3003854 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3003854 ']' 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.164 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 [2024-10-01 17:19:08.713071] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
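Everything from the PCI scan down to the ping replies above is building the split test topology: the target-side port (cvl_0_0) is moved into its own network namespace, the initiator-side port (cvl_0_1) stays in the default namespace, and the two ends talk over 10.0.0.0/24 with TCP port 4420 opened for NVMe/TCP. In shell terms the setup amounts to the following sketch (device, namespace, and address values taken from this run):

#!/usr/bin/env bash
# Namespace-based target/initiator split, mirroring the trace above.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, gets 10.0.0.2 inside $NS
INI_IF=cvl_0_1      # initiator-side port, gets 10.0.0.1 in the default netns

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and tag the rule so teardown can filter it out again.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check reachability in both directions before the target comes up.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1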
00:19:11.164 [2024-10-01 17:19:08.713142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.164 [2024-10-01 17:19:08.792439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.164 [2024-10-01 17:19:08.832940] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.164 [2024-10-01 17:19:08.832987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.164 [2024-10-01 17:19:08.833001] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.164 [2024-10-01 17:19:08.833008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.164 [2024-10-01 17:19:08.833015] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.164 [2024-10-01 17:19:08.833066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.164 [2024-10-01 17:19:08.833102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.164 [2024-10-01 17:19:08.833262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.164 [2024-10-01 17:19:08.833263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 [2024-10-01 17:19:09.560829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 Malloc0 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
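The rpc_cmd calls traced around this point assemble the target configuration for the nvme_cli test: a TCP transport, two 64 MiB malloc bdevs, and subsystem nqn.2016-06.io.spdk:cnode1 exposing them on 10.0.0.2:4420 alongside the discovery listener. A sketch of the equivalent direct rpc.py invocations (the $rpc variable is ours; rpc_cmd in the test talks to the same default /var/tmp/spdk.sock socket):

#!/usr/bin/env bash
# rpc.py equivalents of the target setup traced around this point.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                 # flags exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291    # flags exactly as traced
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The host side then exercises this with nvme discover and nvme connect against 10.0.0.2:4420 using the generated host NQN, which is what the discovery log entries and the /dev/nvme0n1, /dev/nvme0n2 devices further down reflect.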
00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 Malloc1 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.164 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.165 [2024-10-01 17:19:09.626867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.165 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:19:11.425 00:19:11.425 Discovery Log Number of Records 2, Generation counter 2 00:19:11.425 =====Discovery Log Entry 0====== 00:19:11.425 trtype: tcp 00:19:11.425 adrfam: ipv4 00:19:11.425 subtype: current discovery subsystem 00:19:11.425 treq: not required 00:19:11.425 portid: 0 00:19:11.425 trsvcid: 4420 00:19:11.425 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:11.425 traddr: 10.0.0.2 00:19:11.425 eflags: explicit discovery connections, duplicate discovery information 00:19:11.425 sectype: none 00:19:11.425 =====Discovery Log Entry 1====== 00:19:11.425 trtype: tcp 00:19:11.425 adrfam: ipv4 00:19:11.425 subtype: nvme subsystem 00:19:11.425 treq: not required 00:19:11.425 portid: 0 00:19:11.425 trsvcid: 4420 00:19:11.425 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:11.425 traddr: 10.0.0.2 00:19:11.425 eflags: none 00:19:11.425 sectype: none 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:11.425 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:13.337 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:15.250 17:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:15.250 /dev/nvme0n2 ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.250 17:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.250 rmmod nvme_tcp 00:19:15.250 rmmod nvme_fabrics 00:19:15.250 rmmod nvme_keyring 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3003854 ']' 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3003854 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3003854 ']' 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3003854 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:15.250 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3003854 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3003854' 00:19:15.251 killing process with pid 3003854 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3003854 00:19:15.251 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3003854 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.514 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.426 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:17.426 00:19:17.426 real 0m14.950s 00:19:17.426 user 0m22.384s 00:19:17.426 sys 0m6.167s 00:19:17.426 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.426 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:17.426 ************************************ 00:19:17.426 END TEST nvmf_nvme_cli 00:19:17.426 ************************************ 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.686 ************************************ 00:19:17.686 START TEST nvmf_vfio_user 00:19:17.686 ************************************ 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:17.686 * Looking for test storage... 00:19:17.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.686 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:17.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.946 --rc genhtml_branch_coverage=1 00:19:17.946 --rc genhtml_function_coverage=1 00:19:17.946 --rc genhtml_legend=1 00:19:17.946 --rc geninfo_all_blocks=1 00:19:17.946 --rc geninfo_unexecuted_blocks=1 00:19:17.946 00:19:17.946 ' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:17.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.946 --rc genhtml_branch_coverage=1 00:19:17.946 --rc genhtml_function_coverage=1 00:19:17.946 --rc genhtml_legend=1 00:19:17.946 --rc geninfo_all_blocks=1 00:19:17.946 --rc geninfo_unexecuted_blocks=1 00:19:17.946 00:19:17.946 ' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:17.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.946 --rc genhtml_branch_coverage=1 00:19:17.946 --rc genhtml_function_coverage=1 00:19:17.946 --rc genhtml_legend=1 00:19:17.946 --rc geninfo_all_blocks=1 00:19:17.946 --rc geninfo_unexecuted_blocks=1 00:19:17.946 00:19:17.946 ' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:17.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.946 --rc genhtml_branch_coverage=1 00:19:17.946 --rc genhtml_function_coverage=1 00:19:17.946 --rc genhtml_legend=1 00:19:17.946 --rc geninfo_all_blocks=1 00:19:17.946 --rc geninfo_unexecuted_blocks=1 00:19:17.946 00:19:17.946 ' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
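The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above comes from handing an empty string to bash's numeric test ('[' '' -eq 1 ']'); the test simply evaluates false and the script carries on, so the message is noise rather than a failure. A minimal sketch of the behaviour, using an illustrative variable name (not necessarily the one the script actually checks):

    flag=""                          # empty/unset in this environment
    if [ "$flag" -eq 1 ]; then       # prints: [: : integer expression expected, then evaluates false
        echo "enabled"
    fi
    if [ "${flag:-0}" -eq 1 ]; then  # defaulting to 0 keeps the numeric test quiet
        echo "enabled"
    fi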
00:19:17.946 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3005495 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3005495' 00:19:17.947 Process pid: 3005495 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3005495 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3005495 ']' 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.947 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:17.947 [2024-10-01 17:19:16.339766] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:19:17.947 [2024-10-01 17:19:16.339816] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.947 [2024-10-01 17:19:16.401116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.947 [2024-10-01 17:19:16.432680] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.947 [2024-10-01 17:19:16.432721] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:17.947 [2024-10-01 17:19:16.432729] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.947 [2024-10-01 17:19:16.432735] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.947 [2024-10-01 17:19:16.432742] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.947 [2024-10-01 17:19:16.432888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.947 [2024-10-01 17:19:16.433007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.947 [2024-10-01 17:19:16.433131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.947 [2024-10-01 17:19:16.433131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.206 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.206 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:18.206 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:19.145 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:19.405 Malloc1 00:19:19.405 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:19.665 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:19.925 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:19.925 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:19.925 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:19.925 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:20.185 Malloc2 00:19:20.185 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
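The target-side setup traced here (and finished for the second controller just below) reduces to a short RPC sequence: create the VFIOUSER transport once, then give each controller its own socket directory, a 64 MiB/512-byte-block malloc bdev, a subsystem, a namespace, and a VFIOUSER listener. A condensed sketch of those calls, with paths and names copied from the trace ($rpc standing in for scripts/rpc.py against the running nvmf_tgt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER                    # register the vfio-user transport once
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i     # per-controller socket/BAR directory
        $rpc bdev_malloc_create 64 512 -b Malloc$i            # 64 MiB RAM bdev, 512-byte blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done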
00:19:20.445 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:20.704 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:20.704 [2024-10-01 17:19:19.247755] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:19:20.704 [2024-10-01 17:19:19.247830] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006077 ] 00:19:20.965 [2024-10-01 17:19:19.284614] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:20.965 [2024-10-01 17:19:19.289904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:20.965 [2024-10-01 17:19:19.289923] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9a27d5f000 00:19:20.965 [2024-10-01 17:19:19.290907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.291900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.292908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.293922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.294925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.295931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.296932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.297938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.965 [2024-10-01 17:19:19.298944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:20.965 [2024-10-01 17:19:19.298954] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9a26a69000 00:19:20.965 [2024-10-01 17:19:19.300282] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:20.965 [2024-10-01 17:19:19.321157] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:20.965 [2024-10-01 17:19:19.321181] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:20.965 [2024-10-01 17:19:19.324089] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:20.965 [2024-10-01 17:19:19.324133] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:20.965 [2024-10-01 17:19:19.324217] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:20.965 [2024-10-01 17:19:19.324234] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:20.965 [2024-10-01 17:19:19.324239] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:20.965 [2024-10-01 17:19:19.325095] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:20.965 [2024-10-01 17:19:19.325105] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:20.965 [2024-10-01 17:19:19.325112] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:20.965 [2024-10-01 17:19:19.326098] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:20.965 [2024-10-01 17:19:19.326107] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:20.965 [2024-10-01 17:19:19.326115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.327104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:20.965 [2024-10-01 17:19:19.327113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.328108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:20.965 [2024-10-01 
17:19:19.328116] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:20.965 [2024-10-01 17:19:19.328121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.328128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.328233] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:20.965 [2024-10-01 17:19:19.328238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.328243] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:20.965 [2024-10-01 17:19:19.329115] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:20.965 [2024-10-01 17:19:19.330124] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:20.965 [2024-10-01 17:19:19.331126] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:20.965 [2024-10-01 17:19:19.332122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:20.965 [2024-10-01 17:19:19.332266] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:20.965 [2024-10-01 17:19:19.333133] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:20.965 [2024-10-01 17:19:19.333141] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:20.965 [2024-10-01 17:19:19.333146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:20.965 [2024-10-01 17:19:19.333167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:20.965 [2024-10-01 17:19:19.333178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:20.965 [2024-10-01 17:19:19.333192] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:20.965 [2024-10-01 17:19:19.333197] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:20.965 [2024-10-01 17:19:19.333201] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.965 [2024-10-01 17:19:19.333214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:20.965 [2024-10-01 17:19:19.333250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:20.965 [2024-10-01 17:19:19.333259] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:20.965 [2024-10-01 17:19:19.333264] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:20.965 [2024-10-01 17:19:19.333268] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:20.965 [2024-10-01 17:19:19.333275] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:20.965 [2024-10-01 17:19:19.333281] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:20.965 [2024-10-01 17:19:19.333285] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:20.965 [2024-10-01 17:19:19.333290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:20.965 [2024-10-01 17:19:19.333298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:20.965 [2024-10-01 17:19:19.333308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.966 [2024-10-01 17:19:19.333341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.966 [2024-10-01 17:19:19.333350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.966 [2024-10-01 17:19:19.333358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.966 [2024-10-01 17:19:19.333363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333373] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333397] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:20.966 [2024-10-01 17:19:19.333403] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333417] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333512] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:20.966 [2024-10-01 17:19:19.333517] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:20.966 [2024-10-01 17:19:19.333522] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333547] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:20.966 [2024-10-01 17:19:19.333555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333570] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:20.966 [2024-10-01 17:19:19.333574] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:20.966 [2024-10-01 17:19:19.333578] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333624] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333630] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:20.966 [2024-10-01 17:19:19.333635] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:20.966 [2024-10-01 17:19:19.333638] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333663] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333670] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333698] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:20.966 [2024-10-01 17:19:19.333703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:20.966 [2024-10-01 17:19:19.333710] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:20.966 [2024-10-01 17:19:19.333727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333813] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:20.966 [2024-10-01 17:19:19.333818] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:20.966 [2024-10-01 17:19:19.333822] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:20.966 [2024-10-01 17:19:19.333825] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:20.966 [2024-10-01 17:19:19.333829] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:20.966 [2024-10-01 17:19:19.333835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:20.966 [2024-10-01 17:19:19.333842] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:20.966 [2024-10-01 17:19:19.333847] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:20.966 [2024-10-01 17:19:19.333850] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333863] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:20.966 [2024-10-01 17:19:19.333868] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:20.966 [2024-10-01 17:19:19.333871] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333884] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:20.966 [2024-10-01 17:19:19.333889] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:20.966 [2024-10-01 17:19:19.333892] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:20.966 [2024-10-01 17:19:19.333898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:20.966 [2024-10-01 17:19:19.333905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:20.966 [2024-10-01 17:19:19.333938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:20.966 ===================================================== 00:19:20.966 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:20.966 ===================================================== 00:19:20.966 Controller Capabilities/Features 00:19:20.966 ================================ 00:19:20.966 Vendor ID: 4e58 00:19:20.966 Subsystem Vendor ID: 4e58 00:19:20.966 Serial Number: SPDK1 00:19:20.966 Model Number: SPDK bdev Controller 00:19:20.966 Firmware Version: 25.01 00:19:20.966 Recommended Arb Burst: 6 00:19:20.966 IEEE OUI Identifier: 8d 6b 50 00:19:20.966 Multi-path I/O 00:19:20.966 May have multiple subsystem ports: Yes 00:19:20.966 May have multiple controllers: Yes 00:19:20.966 Associated with SR-IOV VF: No 00:19:20.966 Max Data Transfer Size: 131072 00:19:20.967 Max Number of Namespaces: 32 00:19:20.967 Max Number of I/O Queues: 127 00:19:20.967 NVMe Specification Version (VS): 1.3 00:19:20.967 NVMe Specification Version (Identify): 1.3 00:19:20.967 Maximum Queue Entries: 256 00:19:20.967 Contiguous Queues Required: Yes 00:19:20.967 Arbitration Mechanisms Supported 00:19:20.967 Weighted Round Robin: Not Supported 00:19:20.967 Vendor Specific: Not Supported 00:19:20.967 Reset Timeout: 15000 ms 00:19:20.967 Doorbell Stride: 4 bytes 00:19:20.967 NVM Subsystem Reset: Not Supported 00:19:20.967 Command Sets Supported 00:19:20.967 NVM Command Set: Supported 00:19:20.967 Boot Partition: Not Supported 00:19:20.967 Memory Page Size Minimum: 4096 bytes 00:19:20.967 Memory Page Size Maximum: 4096 bytes 00:19:20.967 Persistent Memory Region: Not Supported 00:19:20.967 Optional Asynchronous Events Supported 00:19:20.967 Namespace Attribute Notices: Supported 00:19:20.967 Firmware Activation Notices: Not Supported 00:19:20.967 ANA Change Notices: Not Supported 00:19:20.967 PLE Aggregate Log Change Notices: Not Supported 00:19:20.967 LBA Status Info Alert Notices: Not Supported 00:19:20.967 EGE Aggregate Log Change Notices: Not Supported 00:19:20.967 Normal NVM Subsystem Shutdown event: Not Supported 00:19:20.967 Zone Descriptor Change Notices: Not Supported 00:19:20.967 Discovery Log Change Notices: Not Supported 00:19:20.967 Controller Attributes 00:19:20.967 128-bit Host Identifier: Supported 00:19:20.967 Non-Operational Permissive Mode: Not Supported 00:19:20.967 NVM Sets: Not Supported 00:19:20.967 Read Recovery Levels: Not Supported 00:19:20.967 Endurance Groups: Not Supported 00:19:20.967 Predictable Latency Mode: Not Supported 00:19:20.967 Traffic Based Keep ALive: Not Supported 00:19:20.967 Namespace Granularity: Not Supported 00:19:20.967 SQ Associations: Not Supported 00:19:20.967 UUID List: Not Supported 00:19:20.967 Multi-Domain Subsystem: Not Supported 00:19:20.967 Fixed Capacity Management: Not Supported 00:19:20.967 Variable Capacity Management: Not Supported 00:19:20.967 Delete Endurance Group: Not Supported 00:19:20.967 Delete NVM Set: Not Supported 00:19:20.967 Extended LBA Formats Supported: Not Supported 00:19:20.967 Flexible Data Placement Supported: Not Supported 00:19:20.967 00:19:20.967 Controller Memory Buffer Support 00:19:20.967 ================================ 00:19:20.967 Supported: No 00:19:20.967 00:19:20.967 Persistent Memory Region Support 00:19:20.967 
================================ 00:19:20.967 Supported: No 00:19:20.967 00:19:20.967 Admin Command Set Attributes 00:19:20.967 ============================ 00:19:20.967 Security Send/Receive: Not Supported 00:19:20.967 Format NVM: Not Supported 00:19:20.967 Firmware Activate/Download: Not Supported 00:19:20.967 Namespace Management: Not Supported 00:19:20.967 Device Self-Test: Not Supported 00:19:20.967 Directives: Not Supported 00:19:20.967 NVMe-MI: Not Supported 00:19:20.967 Virtualization Management: Not Supported 00:19:20.967 Doorbell Buffer Config: Not Supported 00:19:20.967 Get LBA Status Capability: Not Supported 00:19:20.967 Command & Feature Lockdown Capability: Not Supported 00:19:20.967 Abort Command Limit: 4 00:19:20.967 Async Event Request Limit: 4 00:19:20.967 Number of Firmware Slots: N/A 00:19:20.967 Firmware Slot 1 Read-Only: N/A 00:19:20.967 Firmware Activation Without Reset: N/A 00:19:20.967 Multiple Update Detection Support: N/A 00:19:20.967 Firmware Update Granularity: No Information Provided 00:19:20.967 Per-Namespace SMART Log: No 00:19:20.967 Asymmetric Namespace Access Log Page: Not Supported 00:19:20.967 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:20.967 Command Effects Log Page: Supported 00:19:20.967 Get Log Page Extended Data: Supported 00:19:20.967 Telemetry Log Pages: Not Supported 00:19:20.967 Persistent Event Log Pages: Not Supported 00:19:20.967 Supported Log Pages Log Page: May Support 00:19:20.967 Commands Supported & Effects Log Page: Not Supported 00:19:20.967 Feature Identifiers & Effects Log Page:May Support 00:19:20.967 NVMe-MI Commands & Effects Log Page: May Support 00:19:20.967 Data Area 4 for Telemetry Log: Not Supported 00:19:20.967 Error Log Page Entries Supported: 128 00:19:20.967 Keep Alive: Supported 00:19:20.967 Keep Alive Granularity: 10000 ms 00:19:20.967 00:19:20.967 NVM Command Set Attributes 00:19:20.967 ========================== 00:19:20.967 Submission Queue Entry Size 00:19:20.967 Max: 64 00:19:20.967 Min: 64 00:19:20.967 Completion Queue Entry Size 00:19:20.967 Max: 16 00:19:20.967 Min: 16 00:19:20.967 Number of Namespaces: 32 00:19:20.967 Compare Command: Supported 00:19:20.967 Write Uncorrectable Command: Not Supported 00:19:20.967 Dataset Management Command: Supported 00:19:20.967 Write Zeroes Command: Supported 00:19:20.967 Set Features Save Field: Not Supported 00:19:20.967 Reservations: Not Supported 00:19:20.967 Timestamp: Not Supported 00:19:20.967 Copy: Supported 00:19:20.967 Volatile Write Cache: Present 00:19:20.967 Atomic Write Unit (Normal): 1 00:19:20.967 Atomic Write Unit (PFail): 1 00:19:20.967 Atomic Compare & Write Unit: 1 00:19:20.967 Fused Compare & Write: Supported 00:19:20.967 Scatter-Gather List 00:19:20.967 SGL Command Set: Supported (Dword aligned) 00:19:20.967 SGL Keyed: Not Supported 00:19:20.967 SGL Bit Bucket Descriptor: Not Supported 00:19:20.967 SGL Metadata Pointer: Not Supported 00:19:20.967 Oversized SGL: Not Supported 00:19:20.967 SGL Metadata Address: Not Supported 00:19:20.967 SGL Offset: Not Supported 00:19:20.967 Transport SGL Data Block: Not Supported 00:19:20.967 Replay Protected Memory Block: Not Supported 00:19:20.967 00:19:20.967 Firmware Slot Information 00:19:20.967 ========================= 00:19:20.967 Active slot: 1 00:19:20.967 Slot 1 Firmware Revision: 25.01 00:19:20.967 00:19:20.967 00:19:20.967 Commands Supported and Effects 00:19:20.967 ============================== 00:19:20.967 Admin Commands 00:19:20.967 -------------- 00:19:20.967 Get Log Page (02h): Supported 
00:19:20.967 Identify (06h): Supported 00:19:20.967 Abort (08h): Supported 00:19:20.967 Set Features (09h): Supported 00:19:20.967 Get Features (0Ah): Supported 00:19:20.967 Asynchronous Event Request (0Ch): Supported 00:19:20.967 Keep Alive (18h): Supported 00:19:20.967 I/O Commands 00:19:20.967 ------------ 00:19:20.967 Flush (00h): Supported LBA-Change 00:19:20.967 Write (01h): Supported LBA-Change 00:19:20.967 Read (02h): Supported 00:19:20.967 Compare (05h): Supported 00:19:20.967 Write Zeroes (08h): Supported LBA-Change 00:19:20.967 Dataset Management (09h): Supported LBA-Change 00:19:20.967 Copy (19h): Supported LBA-Change 00:19:20.967 00:19:20.967 Error Log 00:19:20.967 ========= 00:19:20.967 00:19:20.967 Arbitration 00:19:20.967 =========== 00:19:20.967 Arbitration Burst: 1 00:19:20.967 00:19:20.967 Power Management 00:19:20.967 ================ 00:19:20.967 Number of Power States: 1 00:19:20.967 Current Power State: Power State #0 00:19:20.967 Power State #0: 00:19:20.967 Max Power: 0.00 W 00:19:20.967 Non-Operational State: Operational 00:19:20.967 Entry Latency: Not Reported 00:19:20.967 Exit Latency: Not Reported 00:19:20.967 Relative Read Throughput: 0 00:19:20.967 Relative Read Latency: 0 00:19:20.967 Relative Write Throughput: 0 00:19:20.967 Relative Write Latency: 0 00:19:20.967 Idle Power: Not Reported 00:19:20.967 Active Power: Not Reported 00:19:20.967 Non-Operational Permissive Mode: Not Supported 00:19:20.967 00:19:20.967 Health Information 00:19:20.967 ================== 00:19:20.967 Critical Warnings: 00:19:20.967 Available Spare Space: OK 00:19:20.967 Temperature: OK 00:19:20.967 Device Reliability: OK 00:19:20.967 Read Only: No 00:19:20.967 Volatile Memory Backup: OK 00:19:20.967 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:20.967 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:20.967 Available Spare: 0% 00:19:20.967 Available Sp[2024-10-01 17:19:19.334044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:20.967 [2024-10-01 17:19:19.334053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:20.967 [2024-10-01 17:19:19.334081] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:20.967 [2024-10-01 17:19:19.334092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.967 [2024-10-01 17:19:19.334098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.967 [2024-10-01 17:19:19.334105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.967 [2024-10-01 17:19:19.334111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.967 [2024-10-01 17:19:19.337002] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:20.967 [2024-10-01 17:19:19.337013] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:20.968 [2024-10-01 17:19:19.337160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:19:20.968 [2024-10-01 17:19:19.337201] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:20.968 [2024-10-01 17:19:19.337207] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:20.968 [2024-10-01 17:19:19.338161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:20.968 [2024-10-01 17:19:19.338172] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:20.968 [2024-10-01 17:19:19.338234] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:20.968 [2024-10-01 17:19:19.340190] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:20.968 are Threshold: 0% 00:19:20.968 Life Percentage Used: 0% 00:19:20.968 Data Units Read: 0 00:19:20.968 Data Units Written: 0 00:19:20.968 Host Read Commands: 0 00:19:20.968 Host Write Commands: 0 00:19:20.968 Controller Busy Time: 0 minutes 00:19:20.968 Power Cycles: 0 00:19:20.968 Power On Hours: 0 hours 00:19:20.968 Unsafe Shutdowns: 0 00:19:20.968 Unrecoverable Media Errors: 0 00:19:20.968 Lifetime Error Log Entries: 0 00:19:20.968 Warning Temperature Time: 0 minutes 00:19:20.968 Critical Temperature Time: 0 minutes 00:19:20.968 00:19:20.968 Number of Queues 00:19:20.968 ================ 00:19:20.968 Number of I/O Submission Queues: 127 00:19:20.968 Number of I/O Completion Queues: 127 00:19:20.968 00:19:20.968 Active Namespaces 00:19:20.968 ================= 00:19:20.968 Namespace ID:1 00:19:20.968 Error Recovery Timeout: Unlimited 00:19:20.968 Command Set Identifier: NVM (00h) 00:19:20.968 Deallocate: Supported 00:19:20.968 Deallocated/Unwritten Error: Not Supported 00:19:20.968 Deallocated Read Value: Unknown 00:19:20.968 Deallocate in Write Zeroes: Not Supported 00:19:20.968 Deallocated Guard Field: 0xFFFF 00:19:20.968 Flush: Supported 00:19:20.968 Reservation: Supported 00:19:20.968 Namespace Sharing Capabilities: Multiple Controllers 00:19:20.968 Size (in LBAs): 131072 (0GiB) 00:19:20.968 Capacity (in LBAs): 131072 (0GiB) 00:19:20.968 Utilization (in LBAs): 131072 (0GiB) 00:19:20.968 NGUID: AADE631005254CA8BC653060DAA19B41 00:19:20.968 UUID: aade6310-0525-4ca8-bc65-3060daa19b41 00:19:20.968 Thin Provisioning: Not Supported 00:19:20.968 Per-NS Atomic Units: Yes 00:19:20.968 Atomic Boundary Size (Normal): 0 00:19:20.968 Atomic Boundary Size (PFail): 0 00:19:20.968 Atomic Boundary Offset: 0 00:19:20.968 Maximum Single Source Range Length: 65535 00:19:20.968 Maximum Copy Length: 65535 00:19:20.968 Maximum Source Range Count: 1 00:19:20.968 NGUID/EUI64 Never Reused: No 00:19:20.968 Namespace Write Protected: No 00:19:20.968 Number of LBA Formats: 1 00:19:20.968 Current LBA Format: LBA Format #00 00:19:20.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:20.968 00:19:20.968 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:21.227 [2024-10-01 17:19:19.524601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:26.509 Initializing NVMe Controllers 00:19:26.509 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:26.509 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:26.509 Initialization complete. Launching workers. 00:19:26.509 ======================================================== 00:19:26.509 Latency(us) 00:19:26.509 Device Information : IOPS MiB/s Average min max 00:19:26.509 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39948.87 156.05 3203.78 843.27 8377.31 00:19:26.509 ======================================================== 00:19:26.509 Total : 39948.87 156.05 3203.78 843.27 8377.31 00:19:26.509 00:19:26.509 [2024-10-01 17:19:24.543498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:26.509 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:26.509 [2024-10-01 17:19:24.726394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:31.823 Initializing NVMe Controllers 00:19:31.823 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:31.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:31.823 Initialization complete. Launching workers. 00:19:31.823 ======================================================== 00:19:31.823 Latency(us) 00:19:31.823 Device Information : IOPS MiB/s Average min max 00:19:31.823 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.97 62.72 7977.66 5988.50 9970.56 00:19:31.823 ======================================================== 00:19:31.823 Total : 16055.97 62.72 7977.66 5988.50 9970.56 00:19:31.823 00:19:31.823 [2024-10-01 17:19:29.769251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:31.823 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:31.823 [2024-10-01 17:19:29.964140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:37.112 [2024-10-01 17:19:35.040233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:37.112 Initializing NVMe Controllers 00:19:37.112 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:37.112 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:37.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:37.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:37.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:37.112 Initialization complete. Launching workers. 
00:19:37.112 Starting thread on core 2 00:19:37.112 Starting thread on core 3 00:19:37.112 Starting thread on core 1 00:19:37.112 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:37.112 [2024-10-01 17:19:35.292563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.414 [2024-10-01 17:19:38.352318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.414 Initializing NVMe Controllers 00:19:40.414 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.414 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.414 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:40.414 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:40.414 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:40.414 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:40.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:40.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:40.414 Initialization complete. Launching workers. 00:19:40.414 Starting thread on core 1 with urgent priority queue 00:19:40.414 Starting thread on core 2 with urgent priority queue 00:19:40.414 Starting thread on core 3 with urgent priority queue 00:19:40.414 Starting thread on core 0 with urgent priority queue 00:19:40.414 SPDK bdev Controller (SPDK1 ) core 0: 15417.00 IO/s 6.49 secs/100000 ios 00:19:40.414 SPDK bdev Controller (SPDK1 ) core 1: 12363.67 IO/s 8.09 secs/100000 ios 00:19:40.414 SPDK bdev Controller (SPDK1 ) core 2: 13315.67 IO/s 7.51 secs/100000 ios 00:19:40.414 SPDK bdev Controller (SPDK1 ) core 3: 12707.67 IO/s 7.87 secs/100000 ios 00:19:40.414 ======================================================== 00:19:40.414 00:19:40.414 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:40.414 [2024-10-01 17:19:38.617450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.414 Initializing NVMe Controllers 00:19:40.414 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.414 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.414 Namespace ID: 1 size: 0GB 00:19:40.414 Initialization complete. 00:19:40.414 INFO: using host memory buffer for IO 00:19:40.414 Hello world! 
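Every initiator-side tool in this stretch (identify, perf, reconnect, arbitration, hello_world, and the overhead run that follows) reaches the target the same way: through a VFIOUSER transport ID whose traddr is the per-controller directory created above rather than an IP address. A condensed sketch of the invocations seen in the trace, with flags copied as-is (queue depth 128, 4096-byte I/O, 5-second runs on core 1 for the perf passes):

    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci   # controller dump with vfio debug logging
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
    $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"                               # minimal write-then-read example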
00:19:40.414 [2024-10-01 17:19:38.654651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.414 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:40.414 [2024-10-01 17:19:38.908235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:41.798 Initializing NVMe Controllers 00:19:41.798 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:41.798 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:41.798 Initialization complete. Launching workers. 00:19:41.798 submit (in ns) avg, min, max = 7592.9, 3900.0, 3999928.3 00:19:41.798 complete (in ns) avg, min, max = 17466.9, 2378.3, 4029686.7 00:19:41.798 00:19:41.798 Submit histogram 00:19:41.798 ================ 00:19:41.798 Range in us Cumulative Count 00:19:41.798 3.893 - 3.920: 1.5934% ( 305) 00:19:41.798 3.920 - 3.947: 8.5053% ( 1323) 00:19:41.798 3.947 - 3.973: 18.9489% ( 1999) 00:19:41.798 3.973 - 4.000: 32.0777% ( 2513) 00:19:41.798 4.000 - 4.027: 42.0459% ( 1908) 00:19:41.798 4.027 - 4.053: 54.2553% ( 2337) 00:19:41.798 4.053 - 4.080: 72.8802% ( 3565) 00:19:41.798 4.080 - 4.107: 87.1898% ( 2739) 00:19:41.798 4.107 - 4.133: 95.3503% ( 1562) 00:19:41.798 4.133 - 4.160: 98.5424% ( 611) 00:19:41.798 4.160 - 4.187: 99.2686% ( 139) 00:19:41.798 4.187 - 4.213: 99.4671% ( 38) 00:19:41.798 4.213 - 4.240: 99.5298% ( 12) 00:19:41.798 4.240 - 4.267: 99.5403% ( 2) 00:19:41.798 4.267 - 4.293: 99.5455% ( 1) 00:19:41.798 4.320 - 4.347: 99.5507% ( 1) 00:19:41.798 4.507 - 4.533: 99.5559% ( 1) 00:19:41.798 4.667 - 4.693: 99.5612% ( 1) 00:19:41.798 4.720 - 4.747: 99.5664% ( 1) 00:19:41.798 4.773 - 4.800: 99.5716% ( 1) 00:19:41.798 4.907 - 4.933: 99.5768% ( 1) 00:19:41.798 5.120 - 5.147: 99.5820% ( 1) 00:19:41.798 5.173 - 5.200: 99.5873% ( 1) 00:19:41.798 5.280 - 5.307: 99.5977% ( 2) 00:19:41.798 5.440 - 5.467: 99.6029% ( 1) 00:19:41.798 5.547 - 5.573: 99.6082% ( 1) 00:19:41.798 5.760 - 5.787: 99.6134% ( 1) 00:19:41.798 5.813 - 5.840: 99.6186% ( 1) 00:19:41.798 5.920 - 5.947: 99.6238% ( 1) 00:19:41.798 5.973 - 6.000: 99.6291% ( 1) 00:19:41.798 6.027 - 6.053: 99.6343% ( 1) 00:19:41.798 6.053 - 6.080: 99.6500% ( 3) 00:19:41.798 6.080 - 6.107: 99.6552% ( 1) 00:19:41.798 6.107 - 6.133: 99.6604% ( 1) 00:19:41.798 6.160 - 6.187: 99.6656% ( 1) 00:19:41.798 6.187 - 6.213: 99.6761% ( 2) 00:19:41.798 6.213 - 6.240: 99.6918% ( 3) 00:19:41.798 6.240 - 6.267: 99.6970% ( 1) 00:19:41.798 6.293 - 6.320: 99.7074% ( 2) 00:19:41.798 6.320 - 6.347: 99.7127% ( 1) 00:19:41.798 6.347 - 6.373: 99.7179% ( 1) 00:19:41.798 6.453 - 6.480: 99.7231% ( 1) 00:19:41.798 6.507 - 6.533: 99.7283% ( 1) 00:19:41.798 6.560 - 6.587: 99.7336% ( 1) 00:19:41.798 6.587 - 6.613: 99.7388% ( 1) 00:19:41.798 6.693 - 6.720: 99.7440% ( 1) 00:19:41.798 6.720 - 6.747: 99.7545% ( 2) 00:19:41.798 6.800 - 6.827: 99.7597% ( 1) 00:19:41.798 6.827 - 6.880: 99.7649% ( 1) 00:19:41.798 6.880 - 6.933: 99.7701% ( 1) 00:19:41.798 6.933 - 6.987: 99.7858% ( 3) 00:19:41.798 6.987 - 7.040: 99.7962% ( 2) 00:19:41.798 7.040 - 7.093: 99.8119% ( 3) 00:19:41.798 7.093 - 7.147: 99.8224% ( 2) 00:19:41.798 7.200 - 7.253: 99.8328% ( 2) 00:19:41.798 7.253 - 7.307: 99.8380% ( 1) 00:19:41.798 7.307 - 7.360: 99.8433% ( 1) 00:19:41.798 7.360 - 7.413: 99.8485% ( 1) 
00:19:41.798 7.413 - 7.467: 99.8746% ( 5) 00:19:41.798 7.520 - 7.573: 99.8798% ( 1) 00:19:41.798 7.627 - 7.680: 99.8851% ( 1) 00:19:41.798 7.787 - 7.840: 99.8903% ( 1) 00:19:41.798 8.000 - 8.053: 99.8955% ( 1) 00:19:41.798 8.107 - 8.160: 99.9007% ( 1) 00:19:41.798 8.427 - 8.480: 99.9060% ( 1) 00:19:41.798 9.333 - 9.387: 99.9112% ( 1) 00:19:41.798 3986.773 - 4014.080: 100.0000% ( 17) 00:19:41.798 00:19:41.798 Complete histogram 00:19:41.798 ================== 00:19:41.798 Range in us Cumulative Count 00:19:41.798 2.373 - 2.387: 0.0052% ( 1) 00:19:41.798 2.387 - 2.400: 0.0366% ( 6) 00:19:41.798 2.400 - 2.413: 1.1546% ( 214) 00:19:41.798 2.413 - 2.427: 1.3165% ( 31) 00:19:41.798 2.427 - 2.440: 1.4158% ( 19) 00:19:41.798 2.440 - 2.453: 1.4524% ( 7) 00:19:41.798 2.453 - 2.467: 49.2085% ( 9141) 00:19:41.798 2.467 - 2.480: 68.8679% ( 3763) 00:19:41.798 2.480 - 2.493: 77.1067% ( 1577) 00:19:41.798 2.493 - 2.507: 83.1357% ( 1154) 00:19:41.798 2.507 - [2024-10-01 17:19:39.930803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:41.798 2.520: 84.8597% ( 330) 00:19:41.799 2.520 - 2.533: 87.8533% ( 573) 00:19:41.799 2.533 - 2.547: 93.5688% ( 1094) 00:19:41.799 2.547 - 2.560: 97.0012% ( 657) 00:19:41.799 2.560 - 2.573: 98.4066% ( 269) 00:19:41.799 2.573 - 2.587: 99.1484% ( 142) 00:19:41.799 2.587 - 2.600: 99.3835% ( 45) 00:19:41.799 2.600 - 2.613: 99.4253% ( 8) 00:19:41.799 2.613 - 2.627: 99.4410% ( 3) 00:19:41.799 2.693 - 2.707: 99.4462% ( 1) 00:19:41.799 4.667 - 4.693: 99.4514% ( 1) 00:19:41.799 4.747 - 4.773: 99.4567% ( 1) 00:19:41.799 4.773 - 4.800: 99.4671% ( 2) 00:19:41.799 4.880 - 4.907: 99.4723% ( 1) 00:19:41.799 4.933 - 4.960: 99.4776% ( 1) 00:19:41.799 5.147 - 5.173: 99.4932% ( 3) 00:19:41.799 5.200 - 5.227: 99.4985% ( 1) 00:19:41.799 5.253 - 5.280: 99.5037% ( 1) 00:19:41.799 5.280 - 5.307: 99.5141% ( 2) 00:19:41.799 5.307 - 5.333: 99.5194% ( 1) 00:19:41.799 5.333 - 5.360: 99.5246% ( 1) 00:19:41.799 5.387 - 5.413: 99.5298% ( 1) 00:19:41.799 5.440 - 5.467: 99.5350% ( 1) 00:19:41.799 5.573 - 5.600: 99.5403% ( 1) 00:19:41.799 5.627 - 5.653: 99.5559% ( 3) 00:19:41.799 5.680 - 5.707: 99.5612% ( 1) 00:19:41.799 5.733 - 5.760: 99.5664% ( 1) 00:19:41.799 5.787 - 5.813: 99.5716% ( 1) 00:19:41.799 5.813 - 5.840: 99.5768% ( 1) 00:19:41.799 5.867 - 5.893: 99.5820% ( 1) 00:19:41.799 6.000 - 6.027: 99.5977% ( 3) 00:19:41.799 6.107 - 6.133: 99.6029% ( 1) 00:19:41.799 7.573 - 7.627: 99.6082% ( 1) 00:19:41.799 9.813 - 9.867: 99.6134% ( 1) 00:19:41.799 11.520 - 11.573: 99.6186% ( 1) 00:19:41.799 12.960 - 13.013: 99.6238% ( 1) 00:19:41.799 3072.000 - 3085.653: 99.6291% ( 1) 00:19:41.799 3986.773 - 4014.080: 99.9948% ( 70) 00:19:41.799 4014.080 - 4041.387: 100.0000% ( 1) 00:19:41.799 00:19:41.799 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:41.799 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:41.799 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:41.799 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:41.799 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 
00:19:41.799 [ 00:19:41.799 { 00:19:41.799 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:41.799 "subtype": "Discovery", 00:19:41.799 "listen_addresses": [], 00:19:41.799 "allow_any_host": true, 00:19:41.799 "hosts": [] 00:19:41.799 }, 00:19:41.799 { 00:19:41.799 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:41.799 "subtype": "NVMe", 00:19:41.799 "listen_addresses": [ 00:19:41.799 { 00:19:41.799 "trtype": "VFIOUSER", 00:19:41.799 "adrfam": "IPv4", 00:19:41.799 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:41.799 "trsvcid": "0" 00:19:41.799 } 00:19:41.799 ], 00:19:41.799 "allow_any_host": true, 00:19:41.799 "hosts": [], 00:19:41.799 "serial_number": "SPDK1", 00:19:41.799 "model_number": "SPDK bdev Controller", 00:19:41.799 "max_namespaces": 32, 00:19:41.799 "min_cntlid": 1, 00:19:41.799 "max_cntlid": 65519, 00:19:41.799 "namespaces": [ 00:19:41.799 { 00:19:41.799 "nsid": 1, 00:19:41.799 "bdev_name": "Malloc1", 00:19:41.799 "name": "Malloc1", 00:19:41.799 "nguid": "AADE631005254CA8BC653060DAA19B41", 00:19:41.799 "uuid": "aade6310-0525-4ca8-bc65-3060daa19b41" 00:19:41.799 } 00:19:41.799 ] 00:19:41.799 }, 00:19:41.799 { 00:19:41.799 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:41.799 "subtype": "NVMe", 00:19:41.799 "listen_addresses": [ 00:19:41.799 { 00:19:41.799 "trtype": "VFIOUSER", 00:19:41.799 "adrfam": "IPv4", 00:19:41.799 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:41.799 "trsvcid": "0" 00:19:41.799 } 00:19:41.799 ], 00:19:41.799 "allow_any_host": true, 00:19:41.799 "hosts": [], 00:19:41.799 "serial_number": "SPDK2", 00:19:41.799 "model_number": "SPDK bdev Controller", 00:19:41.799 "max_namespaces": 32, 00:19:41.799 "min_cntlid": 1, 00:19:41.799 "max_cntlid": 65519, 00:19:41.799 "namespaces": [ 00:19:41.799 { 00:19:41.799 "nsid": 1, 00:19:41.799 "bdev_name": "Malloc2", 00:19:41.799 "name": "Malloc2", 00:19:41.799 "nguid": "824537347E7D4678A8B735D783C0C258", 00:19:41.799 "uuid": "82453734-7e7d-4678-a8b7-35d783c0c258" 00:19:41.799 } 00:19:41.799 ] 00:19:41.799 } 00:19:41.799 ] 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3010165 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:41.799 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:41.799 [2024-10-01 17:19:40.324668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:42.060 Malloc3 00:19:42.060 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:42.060 [2024-10-01 17:19:40.527090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:42.060 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:42.060 Asynchronous Event Request test 00:19:42.060 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.060 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.060 Registering asynchronous event callbacks... 00:19:42.060 Starting namespace attribute notice tests for all controllers... 00:19:42.060 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:42.060 aer_cb - Changed Namespace 00:19:42.060 Cleaning up... 00:19:42.321 [ 00:19:42.321 { 00:19:42.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:42.321 "subtype": "Discovery", 00:19:42.321 "listen_addresses": [], 00:19:42.321 "allow_any_host": true, 00:19:42.321 "hosts": [] 00:19:42.321 }, 00:19:42.321 { 00:19:42.321 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:42.321 "subtype": "NVMe", 00:19:42.321 "listen_addresses": [ 00:19:42.321 { 00:19:42.321 "trtype": "VFIOUSER", 00:19:42.321 "adrfam": "IPv4", 00:19:42.321 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:42.321 "trsvcid": "0" 00:19:42.321 } 00:19:42.321 ], 00:19:42.321 "allow_any_host": true, 00:19:42.321 "hosts": [], 00:19:42.321 "serial_number": "SPDK1", 00:19:42.321 "model_number": "SPDK bdev Controller", 00:19:42.321 "max_namespaces": 32, 00:19:42.321 "min_cntlid": 1, 00:19:42.321 "max_cntlid": 65519, 00:19:42.321 "namespaces": [ 00:19:42.321 { 00:19:42.321 "nsid": 1, 00:19:42.321 "bdev_name": "Malloc1", 00:19:42.321 "name": "Malloc1", 00:19:42.321 "nguid": "AADE631005254CA8BC653060DAA19B41", 00:19:42.321 "uuid": "aade6310-0525-4ca8-bc65-3060daa19b41" 00:19:42.321 }, 00:19:42.321 { 00:19:42.321 "nsid": 2, 00:19:42.321 "bdev_name": "Malloc3", 00:19:42.321 "name": "Malloc3", 00:19:42.321 "nguid": "4F4FA6ED3C944A639E72F2343CA78BB8", 00:19:42.321 "uuid": "4f4fa6ed-3c94-4a63-9e72-f2343ca78bb8" 00:19:42.321 } 00:19:42.321 ] 00:19:42.321 }, 00:19:42.321 { 00:19:42.321 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:42.321 "subtype": "NVMe", 00:19:42.321 "listen_addresses": [ 00:19:42.321 { 00:19:42.321 "trtype": "VFIOUSER", 00:19:42.321 "adrfam": "IPv4", 00:19:42.321 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:42.321 "trsvcid": "0" 00:19:42.321 } 00:19:42.321 ], 00:19:42.321 "allow_any_host": true, 00:19:42.321 "hosts": [], 00:19:42.321 "serial_number": "SPDK2", 00:19:42.321 "model_number": "SPDK bdev 
Controller", 00:19:42.321 "max_namespaces": 32, 00:19:42.321 "min_cntlid": 1, 00:19:42.321 "max_cntlid": 65519, 00:19:42.321 "namespaces": [ 00:19:42.321 { 00:19:42.321 "nsid": 1, 00:19:42.321 "bdev_name": "Malloc2", 00:19:42.321 "name": "Malloc2", 00:19:42.321 "nguid": "824537347E7D4678A8B735D783C0C258", 00:19:42.321 "uuid": "82453734-7e7d-4678-a8b7-35d783c0c258" 00:19:42.321 } 00:19:42.321 ] 00:19:42.321 } 00:19:42.321 ] 00:19:42.321 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3010165 00:19:42.321 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:42.321 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:42.321 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:42.321 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:42.321 [2024-10-01 17:19:40.744199] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:19:42.321 [2024-10-01 17:19:40.744221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010219 ] 00:19:42.321 [2024-10-01 17:19:40.772541] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:42.321 [2024-10-01 17:19:40.781213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:42.321 [2024-10-01 17:19:40.781235] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f753fc80000 00:19:42.321 [2024-10-01 17:19:40.782222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.783222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.784229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.785238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.786244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.787250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.788259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.321 [2024-10-01 17:19:40.789267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:42.321 [2024-10-01 17:19:40.790275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:42.321 [2024-10-01 17:19:40.790285] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f753e98a000 00:19:42.321 [2024-10-01 17:19:40.791611] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:42.321 [2024-10-01 17:19:40.807813] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:42.321 [2024-10-01 17:19:40.807838] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:42.321 [2024-10-01 17:19:40.812915] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:42.321 [2024-10-01 17:19:40.812958] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:42.321 [2024-10-01 17:19:40.813042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:42.321 [2024-10-01 17:19:40.813063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:42.321 [2024-10-01 17:19:40.813068] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:42.321 [2024-10-01 17:19:40.813926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:42.321 [2024-10-01 17:19:40.813935] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:42.321 [2024-10-01 17:19:40.813942] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:42.321 [2024-10-01 17:19:40.814926] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:42.321 [2024-10-01 17:19:40.814935] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:42.322 [2024-10-01 17:19:40.814943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.815930] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:42.322 [2024-10-01 17:19:40.815940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.816931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:42.322 [2024-10-01 17:19:40.816939] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:42.322 [2024-10-01 
17:19:40.816944] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.816951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.817057] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:42.322 [2024-10-01 17:19:40.817062] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.817068] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:42.322 [2024-10-01 17:19:40.817944] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:42.322 [2024-10-01 17:19:40.818951] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:42.322 [2024-10-01 17:19:40.819953] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:42.322 [2024-10-01 17:19:40.820954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:42.322 [2024-10-01 17:19:40.820999] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:42.322 [2024-10-01 17:19:40.821964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:42.322 [2024-10-01 17:19:40.821972] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:42.322 [2024-10-01 17:19:40.821979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.822004] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:42.322 [2024-10-01 17:19:40.822015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.822027] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.322 [2024-10-01 17:19:40.822033] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.322 [2024-10-01 17:19:40.822036] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.322 [2024-10-01 17:19:40.822048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.826003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.826015] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:42.322 [2024-10-01 17:19:40.826020] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:42.322 [2024-10-01 17:19:40.826025] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:42.322 [2024-10-01 17:19:40.826029] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:42.322 [2024-10-01 17:19:40.826034] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:42.322 [2024-10-01 17:19:40.826039] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:42.322 [2024-10-01 17:19:40.826044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.826051] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.826062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.833999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.834011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.322 [2024-10-01 17:19:40.834020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.322 [2024-10-01 17:19:40.834029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.322 [2024-10-01 17:19:40.834037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.322 [2024-10-01 17:19:40.834042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.834052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.834061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.842000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.842011] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:42.322 [2024-10-01 17:19:40.842016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.842023] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.842030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.842039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.850000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.850064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.850072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.850080] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:42.322 [2024-10-01 17:19:40.850085] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:42.322 [2024-10-01 17:19:40.850088] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.322 [2024-10-01 17:19:40.850095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.857998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.858009] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:42.322 [2024-10-01 17:19:40.858018] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.858026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.858034] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.322 [2024-10-01 17:19:40.858038] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.322 [2024-10-01 17:19:40.858042] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.322 [2024-10-01 17:19:40.858048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.322 [2024-10-01 17:19:40.866000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:42.322 [2024-10-01 17:19:40.866014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:42.322 [2024-10-01 17:19:40.866022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:42.322 
[2024-10-01 17:19:40.866029] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.322 [2024-10-01 17:19:40.866034] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.322 [2024-10-01 17:19:40.866037] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.322 [2024-10-01 17:19:40.866046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.873999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.874010] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874048] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:42.585 [2024-10-01 17:19:40.874052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:42.585 [2024-10-01 17:19:40.874057] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:42.585 [2024-10-01 17:19:40.874074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.881999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.882013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.889999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.890020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.897998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.898011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.906000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.906019] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:42.585 [2024-10-01 17:19:40.906023] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:42.585 [2024-10-01 17:19:40.906027] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:42.585 [2024-10-01 17:19:40.906031] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:42.585 [2024-10-01 17:19:40.906035] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:42.585 [2024-10-01 17:19:40.906041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:42.585 [2024-10-01 17:19:40.906049] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:42.585 [2024-10-01 17:19:40.906055] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:42.585 [2024-10-01 17:19:40.906059] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.585 [2024-10-01 17:19:40.906065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.906072] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:42.585 [2024-10-01 17:19:40.906077] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.585 [2024-10-01 17:19:40.906080] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.585 [2024-10-01 17:19:40.906087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.906094] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:42.585 [2024-10-01 17:19:40.906099] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:42.585 [2024-10-01 17:19:40.906102] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.585 [2024-10-01 17:19:40.906108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:42.585 [2024-10-01 17:19:40.913999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.914013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.914024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:42.585 [2024-10-01 17:19:40.914031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:42.585 ===================================================== 00:19:42.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:42.585 ===================================================== 00:19:42.585 Controller Capabilities/Features 00:19:42.585 ================================ 00:19:42.585 Vendor ID: 4e58 00:19:42.585 Subsystem Vendor ID: 4e58 00:19:42.585 Serial Number: SPDK2 00:19:42.585 Model Number: SPDK bdev Controller 00:19:42.585 Firmware Version: 25.01 00:19:42.585 Recommended Arb Burst: 6 00:19:42.585 IEEE OUI Identifier: 8d 6b 50 00:19:42.585 Multi-path I/O 00:19:42.585 May have multiple subsystem ports: Yes 00:19:42.585 May have multiple controllers: Yes 00:19:42.585 Associated with SR-IOV VF: No 00:19:42.585 Max Data Transfer Size: 131072 00:19:42.585 Max Number of Namespaces: 32 00:19:42.585 Max Number of I/O Queues: 127 00:19:42.585 NVMe Specification Version (VS): 1.3 00:19:42.585 NVMe Specification Version (Identify): 1.3 00:19:42.585 Maximum Queue Entries: 256 00:19:42.585 Contiguous Queues Required: Yes 00:19:42.585 Arbitration Mechanisms Supported 00:19:42.585 Weighted Round Robin: Not Supported 00:19:42.585 Vendor Specific: Not Supported 00:19:42.585 Reset Timeout: 15000 ms 00:19:42.585 Doorbell Stride: 4 bytes 00:19:42.585 NVM Subsystem Reset: Not Supported 00:19:42.585 Command Sets Supported 00:19:42.585 NVM Command Set: Supported 00:19:42.585 Boot Partition: Not Supported 00:19:42.585 Memory Page Size Minimum: 4096 bytes 00:19:42.585 Memory Page Size Maximum: 4096 bytes 00:19:42.585 Persistent Memory Region: Not Supported 00:19:42.585 Optional Asynchronous Events Supported 00:19:42.585 Namespace Attribute Notices: Supported 00:19:42.585 Firmware Activation Notices: Not Supported 00:19:42.585 ANA Change Notices: Not Supported 00:19:42.585 PLE Aggregate Log Change Notices: Not Supported 00:19:42.586 LBA Status Info Alert Notices: Not Supported 00:19:42.586 EGE Aggregate Log Change Notices: Not Supported 00:19:42.586 Normal NVM Subsystem Shutdown event: Not Supported 00:19:42.586 Zone Descriptor Change Notices: Not Supported 00:19:42.586 Discovery Log Change Notices: Not Supported 00:19:42.586 Controller Attributes 00:19:42.586 128-bit Host Identifier: Supported 00:19:42.586 Non-Operational Permissive Mode: Not Supported 00:19:42.586 NVM Sets: Not Supported 00:19:42.586 Read Recovery Levels: Not Supported 00:19:42.586 Endurance Groups: Not Supported 00:19:42.586 Predictable Latency Mode: Not Supported 00:19:42.586 Traffic Based Keep ALive: Not Supported 00:19:42.586 Namespace Granularity: Not Supported 00:19:42.586 SQ Associations: Not Supported 00:19:42.586 UUID List: Not Supported 00:19:42.586 Multi-Domain Subsystem: Not Supported 00:19:42.586 Fixed Capacity Management: Not Supported 00:19:42.586 Variable Capacity Management: Not Supported 00:19:42.586 Delete Endurance Group: Not Supported 00:19:42.586 Delete NVM Set: Not Supported 00:19:42.586 Extended LBA Formats Supported: Not Supported 00:19:42.586 Flexible Data Placement Supported: Not Supported 00:19:42.586 00:19:42.586 Controller Memory Buffer Support 00:19:42.586 ================================ 00:19:42.586 Supported: No 00:19:42.586 00:19:42.586 Persistent Memory Region Support 00:19:42.586 ================================ 00:19:42.586 Supported: No 00:19:42.586 00:19:42.586 Admin Command Set Attributes 00:19:42.586 ============================ 00:19:42.586 Security Send/Receive: Not Supported 
00:19:42.586 Format NVM: Not Supported 00:19:42.586 Firmware Activate/Download: Not Supported 00:19:42.586 Namespace Management: Not Supported 00:19:42.586 Device Self-Test: Not Supported 00:19:42.586 Directives: Not Supported 00:19:42.586 NVMe-MI: Not Supported 00:19:42.586 Virtualization Management: Not Supported 00:19:42.586 Doorbell Buffer Config: Not Supported 00:19:42.586 Get LBA Status Capability: Not Supported 00:19:42.586 Command & Feature Lockdown Capability: Not Supported 00:19:42.586 Abort Command Limit: 4 00:19:42.586 Async Event Request Limit: 4 00:19:42.586 Number of Firmware Slots: N/A 00:19:42.586 Firmware Slot 1 Read-Only: N/A 00:19:42.586 Firmware Activation Without Reset: N/A 00:19:42.586 Multiple Update Detection Support: N/A 00:19:42.586 Firmware Update Granularity: No Information Provided 00:19:42.586 Per-Namespace SMART Log: No 00:19:42.586 Asymmetric Namespace Access Log Page: Not Supported 00:19:42.586 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:42.586 Command Effects Log Page: Supported 00:19:42.586 Get Log Page Extended Data: Supported 00:19:42.586 Telemetry Log Pages: Not Supported 00:19:42.586 Persistent Event Log Pages: Not Supported 00:19:42.586 Supported Log Pages Log Page: May Support 00:19:42.586 Commands Supported & Effects Log Page: Not Supported 00:19:42.586 Feature Identifiers & Effects Log Page:May Support 00:19:42.586 NVMe-MI Commands & Effects Log Page: May Support 00:19:42.586 Data Area 4 for Telemetry Log: Not Supported 00:19:42.586 Error Log Page Entries Supported: 128 00:19:42.586 Keep Alive: Supported 00:19:42.586 Keep Alive Granularity: 10000 ms 00:19:42.586 00:19:42.586 NVM Command Set Attributes 00:19:42.586 ========================== 00:19:42.586 Submission Queue Entry Size 00:19:42.586 Max: 64 00:19:42.586 Min: 64 00:19:42.586 Completion Queue Entry Size 00:19:42.586 Max: 16 00:19:42.586 Min: 16 00:19:42.586 Number of Namespaces: 32 00:19:42.586 Compare Command: Supported 00:19:42.586 Write Uncorrectable Command: Not Supported 00:19:42.586 Dataset Management Command: Supported 00:19:42.586 Write Zeroes Command: Supported 00:19:42.586 Set Features Save Field: Not Supported 00:19:42.586 Reservations: Not Supported 00:19:42.586 Timestamp: Not Supported 00:19:42.586 Copy: Supported 00:19:42.586 Volatile Write Cache: Present 00:19:42.586 Atomic Write Unit (Normal): 1 00:19:42.586 Atomic Write Unit (PFail): 1 00:19:42.586 Atomic Compare & Write Unit: 1 00:19:42.586 Fused Compare & Write: Supported 00:19:42.586 Scatter-Gather List 00:19:42.586 SGL Command Set: Supported (Dword aligned) 00:19:42.586 SGL Keyed: Not Supported 00:19:42.586 SGL Bit Bucket Descriptor: Not Supported 00:19:42.586 SGL Metadata Pointer: Not Supported 00:19:42.586 Oversized SGL: Not Supported 00:19:42.586 SGL Metadata Address: Not Supported 00:19:42.586 SGL Offset: Not Supported 00:19:42.586 Transport SGL Data Block: Not Supported 00:19:42.586 Replay Protected Memory Block: Not Supported 00:19:42.586 00:19:42.586 Firmware Slot Information 00:19:42.586 ========================= 00:19:42.586 Active slot: 1 00:19:42.586 Slot 1 Firmware Revision: 25.01 00:19:42.586 00:19:42.586 00:19:42.586 Commands Supported and Effects 00:19:42.586 ============================== 00:19:42.586 Admin Commands 00:19:42.586 -------------- 00:19:42.586 Get Log Page (02h): Supported 00:19:42.586 Identify (06h): Supported 00:19:42.586 Abort (08h): Supported 00:19:42.586 Set Features (09h): Supported 00:19:42.586 Get Features (0Ah): Supported 00:19:42.586 Asynchronous Event Request (0Ch): 
Supported 00:19:42.586 Keep Alive (18h): Supported 00:19:42.586 I/O Commands 00:19:42.586 ------------ 00:19:42.586 Flush (00h): Supported LBA-Change 00:19:42.586 Write (01h): Supported LBA-Change 00:19:42.586 Read (02h): Supported 00:19:42.586 Compare (05h): Supported 00:19:42.586 Write Zeroes (08h): Supported LBA-Change 00:19:42.586 Dataset Management (09h): Supported LBA-Change 00:19:42.586 Copy (19h): Supported LBA-Change 00:19:42.586 00:19:42.586 Error Log 00:19:42.586 ========= 00:19:42.586 00:19:42.586 Arbitration 00:19:42.586 =========== 00:19:42.586 Arbitration Burst: 1 00:19:42.586 00:19:42.586 Power Management 00:19:42.586 ================ 00:19:42.586 Number of Power States: 1 00:19:42.586 Current Power State: Power State #0 00:19:42.586 Power State #0: 00:19:42.586 Max Power: 0.00 W 00:19:42.586 Non-Operational State: Operational 00:19:42.586 Entry Latency: Not Reported 00:19:42.586 Exit Latency: Not Reported 00:19:42.586 Relative Read Throughput: 0 00:19:42.586 Relative Read Latency: 0 00:19:42.586 Relative Write Throughput: 0 00:19:42.586 Relative Write Latency: 0 00:19:42.586 Idle Power: Not Reported 00:19:42.586 Active Power: Not Reported 00:19:42.586 Non-Operational Permissive Mode: Not Supported 00:19:42.586 00:19:42.586 Health Information 00:19:42.586 ================== 00:19:42.586 Critical Warnings: 00:19:42.586 Available Spare Space: OK 00:19:42.586 Temperature: OK 00:19:42.586 Device Reliability: OK 00:19:42.586 Read Only: No 00:19:42.586 Volatile Memory Backup: OK 00:19:42.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:42.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:42.586 Available Spare: 0% 00:19:42.586 Available Sp[2024-10-01 17:19:40.914129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:42.586 [2024-10-01 17:19:40.921999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:42.586 [2024-10-01 17:19:40.922029] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:42.586 [2024-10-01 17:19:40.922039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.586 [2024-10-01 17:19:40.922045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.586 [2024-10-01 17:19:40.922052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.586 [2024-10-01 17:19:40.922058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.586 [2024-10-01 17:19:40.922097] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:42.586 [2024-10-01 17:19:40.922107] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:42.586 [2024-10-01 17:19:40.923109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:42.586 [2024-10-01 17:19:40.923158] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:42.586 [2024-10-01 17:19:40.923165] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:42.586 [2024-10-01 17:19:40.924108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:42.586 [2024-10-01 17:19:40.924122] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:42.586 [2024-10-01 17:19:40.924180] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:42.586 [2024-10-01 17:19:40.927001] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:42.586 are Threshold: 0% 00:19:42.586 Life Percentage Used: 0% 00:19:42.586 Data Units Read: 0 00:19:42.586 Data Units Written: 0 00:19:42.586 Host Read Commands: 0 00:19:42.586 Host Write Commands: 0 00:19:42.586 Controller Busy Time: 0 minutes 00:19:42.586 Power Cycles: 0 00:19:42.586 Power On Hours: 0 hours 00:19:42.587 Unsafe Shutdowns: 0 00:19:42.587 Unrecoverable Media Errors: 0 00:19:42.587 Lifetime Error Log Entries: 0 00:19:42.587 Warning Temperature Time: 0 minutes 00:19:42.587 Critical Temperature Time: 0 minutes 00:19:42.587 00:19:42.587 Number of Queues 00:19:42.587 ================ 00:19:42.587 Number of I/O Submission Queues: 127 00:19:42.587 Number of I/O Completion Queues: 127 00:19:42.587 00:19:42.587 Active Namespaces 00:19:42.587 ================= 00:19:42.587 Namespace ID:1 00:19:42.587 Error Recovery Timeout: Unlimited 00:19:42.587 Command Set Identifier: NVM (00h) 00:19:42.587 Deallocate: Supported 00:19:42.587 Deallocated/Unwritten Error: Not Supported 00:19:42.587 Deallocated Read Value: Unknown 00:19:42.587 Deallocate in Write Zeroes: Not Supported 00:19:42.587 Deallocated Guard Field: 0xFFFF 00:19:42.587 Flush: Supported 00:19:42.587 Reservation: Supported 00:19:42.587 Namespace Sharing Capabilities: Multiple Controllers 00:19:42.587 Size (in LBAs): 131072 (0GiB) 00:19:42.587 Capacity (in LBAs): 131072 (0GiB) 00:19:42.587 Utilization (in LBAs): 131072 (0GiB) 00:19:42.587 NGUID: 824537347E7D4678A8B735D783C0C258 00:19:42.587 UUID: 82453734-7e7d-4678-a8b7-35d783c0c258 00:19:42.587 Thin Provisioning: Not Supported 00:19:42.587 Per-NS Atomic Units: Yes 00:19:42.587 Atomic Boundary Size (Normal): 0 00:19:42.587 Atomic Boundary Size (PFail): 0 00:19:42.587 Atomic Boundary Offset: 0 00:19:42.587 Maximum Single Source Range Length: 65535 00:19:42.587 Maximum Copy Length: 65535 00:19:42.587 Maximum Source Range Count: 1 00:19:42.587 NGUID/EUI64 Never Reused: No 00:19:42.587 Namespace Write Protected: No 00:19:42.587 Number of LBA Formats: 1 00:19:42.587 Current LBA Format: LBA Format #00 00:19:42.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:42.587 00:19:42.587 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:42.587 [2024-10-01 17:19:41.089959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:47.876 Initializing NVMe Controllers 00:19:47.876 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:19:47.877 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:47.877 Initialization complete. Launching workers. 00:19:47.877 ======================================================== 00:19:47.877 Latency(us) 00:19:47.877 Device Information : IOPS MiB/s Average min max 00:19:47.877 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39984.42 156.19 3201.11 836.50 7782.17 00:19:47.877 ======================================================== 00:19:47.877 Total : 39984.42 156.19 3201.11 836.50 7782.17 00:19:47.877 00:19:47.877 [2024-10-01 17:19:46.195204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:47.877 17:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:47.877 [2024-10-01 17:19:46.379765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:53.162 Initializing NVMe Controllers 00:19:53.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:53.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:53.162 Initialization complete. Launching workers. 00:19:53.162 ======================================================== 00:19:53.162 Latency(us) 00:19:53.162 Device Information : IOPS MiB/s Average min max 00:19:53.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35045.45 136.90 3651.98 1108.61 8813.80 00:19:53.162 ======================================================== 00:19:53.162 Total : 35045.45 136.90 3651.98 1108.61 8813.80 00:19:53.162 00:19:53.162 [2024-10-01 17:19:51.397724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:53.162 17:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:53.162 [2024-10-01 17:19:51.589370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:58.661 [2024-10-01 17:19:56.729080] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:58.661 Initializing NVMe Controllers 00:19:58.661 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:58.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:58.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:58.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:58.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:58.661 Initialization complete. Launching workers. 
00:19:58.661 Starting thread on core 2 00:19:58.661 Starting thread on core 3 00:19:58.661 Starting thread on core 1 00:19:58.661 17:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:58.661 [2024-10-01 17:19:56.985196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:01.960 [2024-10-01 17:20:00.062760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:01.960 Initializing NVMe Controllers 00:20:01.960 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:01.960 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:01.960 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:01.960 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:01.960 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:01.960 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:01.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:01.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:01.960 Initialization complete. Launching workers. 00:20:01.960 Starting thread on core 1 with urgent priority queue 00:20:01.960 Starting thread on core 2 with urgent priority queue 00:20:01.960 Starting thread on core 3 with urgent priority queue 00:20:01.960 Starting thread on core 0 with urgent priority queue 00:20:01.960 SPDK bdev Controller (SPDK2 ) core 0: 11681.67 IO/s 8.56 secs/100000 ios 00:20:01.960 SPDK bdev Controller (SPDK2 ) core 1: 9304.00 IO/s 10.75 secs/100000 ios 00:20:01.960 SPDK bdev Controller (SPDK2 ) core 2: 8064.67 IO/s 12.40 secs/100000 ios 00:20:01.960 SPDK bdev Controller (SPDK2 ) core 3: 10931.00 IO/s 9.15 secs/100000 ios 00:20:01.960 ======================================================== 00:20:01.960 00:20:01.961 17:20:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:01.961 [2024-10-01 17:20:00.324454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:01.961 Initializing NVMe Controllers 00:20:01.961 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:01.961 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:01.961 Namespace ID: 1 size: 0GB 00:20:01.961 Initialization complete. 00:20:01.961 INFO: using host memory buffer for IO 00:20:01.961 Hello world! 
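That completes the per-controller battery for the second vfio-user device: identify (@83), perf read and write (@84/@85), reconnect (@86), arbitration (@87) and hello_world (@88), each shown above with its full command line. Condensed into one place as a sketch, with the same caveats (the workspace path and the socket under /var/run/vfio-user/domain/vfio-user2/2 are this node's layout, and a running target is assumed), the sequence is:

    # Sketch of the sequence the harness just ran against cnode2, using the exact invocations from the log above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    "$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci          # @83: controller/namespace dump
    "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2     # @84: 5 s 4K read, QD 128, lcore 1
    "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2    # @85: same, write
    "$SPDK/build/examples/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE     # @86: 50/50 randrw on cores 1-3
    "$SPDK/build/examples/arbitration" -t 3 -r "$TRID" -d 256 -g                                 # @87
    "$SPDK/build/examples/hello_world" -d 256 -g -r "$TRID"                                      # @88

The overhead run (@89) and the AER namespace-change check (@90) follow next in the log, mirroring what was done for the first controller.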
00:20:01.961 [2024-10-01 17:20:00.333505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:01.961 17:20:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:02.221 [2024-10-01 17:20:00.594256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.160 Initializing NVMe Controllers 00:20:03.160 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.160 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.160 Initialization complete. Launching workers. 00:20:03.160 submit (in ns) avg, min, max = 7394.4, 3900.0, 4001385.0 00:20:03.160 complete (in ns) avg, min, max = 16228.6, 2386.7, 3998776.7 00:20:03.160 00:20:03.160 Submit histogram 00:20:03.160 ================ 00:20:03.160 Range in us Cumulative Count 00:20:03.160 3.893 - 3.920: 0.6565% ( 126) 00:20:03.160 3.920 - 3.947: 5.5176% ( 933) 00:20:03.160 3.947 - 3.973: 15.3493% ( 1887) 00:20:03.160 3.973 - 4.000: 25.9522% ( 2035) 00:20:03.160 4.000 - 4.027: 36.6957% ( 2062) 00:20:03.160 4.027 - 4.053: 47.3923% ( 2053) 00:20:03.160 4.053 - 4.080: 61.7673% ( 2759) 00:20:03.160 4.080 - 4.107: 78.3827% ( 3189) 00:20:03.160 4.107 - 4.133: 90.4444% ( 2315) 00:20:03.160 4.133 - 4.160: 96.4831% ( 1159) 00:20:03.160 4.160 - 4.187: 98.7391% ( 433) 00:20:03.160 4.187 - 4.213: 99.2341% ( 95) 00:20:03.160 4.213 - 4.240: 99.3591% ( 24) 00:20:03.160 4.240 - 4.267: 99.3956% ( 7) 00:20:03.160 4.267 - 4.293: 99.4112% ( 3) 00:20:03.160 4.320 - 4.347: 99.4165% ( 1) 00:20:03.160 4.613 - 4.640: 99.4217% ( 1) 00:20:03.160 4.640 - 4.667: 99.4269% ( 1) 00:20:03.160 4.667 - 4.693: 99.4321% ( 1) 00:20:03.160 4.880 - 4.907: 99.4373% ( 1) 00:20:03.160 5.093 - 5.120: 99.4425% ( 1) 00:20:03.160 5.147 - 5.173: 99.4477% ( 1) 00:20:03.160 5.173 - 5.200: 99.4529% ( 1) 00:20:03.160 5.227 - 5.253: 99.4581% ( 1) 00:20:03.160 5.333 - 5.360: 99.4633% ( 1) 00:20:03.160 5.387 - 5.413: 99.4686% ( 1) 00:20:03.160 5.680 - 5.707: 99.4738% ( 1) 00:20:03.160 5.707 - 5.733: 99.4894% ( 3) 00:20:03.160 5.760 - 5.787: 99.4946% ( 1) 00:20:03.160 5.893 - 5.920: 99.4998% ( 1) 00:20:03.160 5.920 - 5.947: 99.5050% ( 1) 00:20:03.160 6.000 - 6.027: 99.5154% ( 2) 00:20:03.160 6.053 - 6.080: 99.5415% ( 5) 00:20:03.160 6.080 - 6.107: 99.5519% ( 2) 00:20:03.160 6.107 - 6.133: 99.5728% ( 4) 00:20:03.160 6.133 - 6.160: 99.5780% ( 1) 00:20:03.160 6.160 - 6.187: 99.5884% ( 2) 00:20:03.160 6.187 - 6.213: 99.5936% ( 1) 00:20:03.160 6.240 - 6.267: 99.6040% ( 2) 00:20:03.160 6.267 - 6.293: 99.6092% ( 1) 00:20:03.160 6.293 - 6.320: 99.6144% ( 1) 00:20:03.161 6.453 - 6.480: 99.6249% ( 2) 00:20:03.161 6.480 - 6.507: 99.6301% ( 1) 00:20:03.161 6.507 - 6.533: 99.6457% ( 3) 00:20:03.161 6.533 - 6.560: 99.6613% ( 3) 00:20:03.161 6.560 - 6.587: 99.6665% ( 1) 00:20:03.161 6.613 - 6.640: 99.6718% ( 1) 00:20:03.161 6.640 - 6.667: 99.6770% ( 1) 00:20:03.161 6.667 - 6.693: 99.6822% ( 1) 00:20:03.161 6.693 - 6.720: 99.6926% ( 2) 00:20:03.161 6.747 - 6.773: 99.7030% ( 2) 00:20:03.161 6.773 - 6.800: 99.7082% ( 1) 00:20:03.161 6.800 - 6.827: 99.7134% ( 1) 00:20:03.161 6.827 - 6.880: 99.7343% ( 4) 00:20:03.161 6.880 - 6.933: 99.7551% ( 4) 00:20:03.161 6.933 - 6.987: 99.7603% ( 1) 00:20:03.161 6.987 - 7.040: 99.7655% ( 1) 00:20:03.161 7.093 - 7.147: 99.7760% ( 2) 
00:20:03.161 7.147 - 7.200: 99.7812% ( 1) 00:20:03.161 7.200 - 7.253: 99.7864% ( 1) 00:20:03.161 7.253 - 7.307: 99.8020% ( 3) 00:20:03.161 7.307 - 7.360: 99.8072% ( 1) 00:20:03.161 7.360 - 7.413: 99.8124% ( 1) 00:20:03.161 7.467 - 7.520: 99.8176% ( 1) 00:20:03.161 7.520 - 7.573: 99.8229% ( 1) 00:20:03.161 7.573 - 7.627: 99.8385% ( 3) 00:20:03.161 7.627 - 7.680: 99.8489% ( 2) 00:20:03.161 7.680 - 7.733: 99.8541% ( 1) 00:20:03.161 7.733 - 7.787: 99.8645% ( 2) 00:20:03.161 7.947 - 8.000: 99.8697% ( 1) 00:20:03.161 8.000 - 8.053: 99.8750% ( 1) 00:20:03.161 8.107 - 8.160: 99.8802% ( 1) 00:20:03.161 8.587 - 8.640: 99.8854% ( 1) 00:20:03.161 8.640 - 8.693: 99.8906% ( 1) 00:20:03.161 8.907 - 8.960: 99.8958% ( 1) 00:20:03.161 9.120 - 9.173: 99.9010% ( 1) 00:20:03.161 10.293 - 10.347: 99.9062% ( 1) 00:20:03.161 13.333 - 13.387: 99.9114% ( 1) 00:20:03.161 15.680 - 15.787: 99.9166% ( 1) 00:20:03.161 [2024-10-01 17:20:01.689665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:03.421 3986.773 - 4014.080: 100.0000% ( 16) 00:20:03.422 00:20:03.422 Complete histogram 00:20:03.422 ================== 00:20:03.422 Range in us Cumulative Count 00:20:03.422 2.387 - 2.400: 0.3074% ( 59) 00:20:03.422 2.400 - 2.413: 1.1671% ( 165) 00:20:03.422 2.413 - 2.427: 1.2765% ( 21) 00:20:03.422 2.427 - 2.440: 1.4224% ( 28) 00:20:03.422 2.440 - 2.453: 36.2528% ( 6685) 00:20:03.422 2.453 - 2.467: 56.9218% ( 3967) 00:20:03.422 2.467 - 2.480: 69.2596% ( 2368) 00:20:03.422 2.480 - 2.493: 77.5283% ( 1587) 00:20:03.422 2.493 - 2.507: 80.9201% ( 651) 00:20:03.422 2.507 - 2.520: 83.3794% ( 472) 00:20:03.422 2.520 - 2.533: 88.8449% ( 1049) 00:20:03.422 2.533 - 2.547: 94.1176% ( 1012) 00:20:03.422 2.547 - 2.560: 96.8478% ( 524) 00:20:03.422 2.560 - 2.573: 98.6505% ( 346) 00:20:03.422 2.573 - 2.587: 99.1872% ( 103) 00:20:03.422 2.587 - 2.600: 99.3383% ( 29) 00:20:03.422 2.600 - 2.613: 99.4008% ( 12) 00:20:03.422 2.627 - 2.640: 99.4060% ( 1) 00:20:03.422 4.213 - 4.240: 99.4112% ( 1) 00:20:03.422 4.320 - 4.347: 99.4165% ( 1) 00:20:03.422 4.373 - 4.400: 99.4217% ( 1) 00:20:03.422 4.453 - 4.480: 99.4321% ( 2) 00:20:03.422 4.480 - 4.507: 99.4373% ( 1) 00:20:03.422 4.507 - 4.533: 99.4477% ( 2) 00:20:03.422 4.560 - 4.587: 99.4529% ( 1) 00:20:03.422 4.747 - 4.773: 99.4686% ( 3) 00:20:03.422 4.773 - 4.800: 99.4738% ( 1) 00:20:03.422 4.880 - 4.907: 99.4842% ( 2) 00:20:03.422 4.907 - 4.933: 99.4894% ( 1) 00:20:03.422 4.987 - 5.013: 99.4946% ( 1) 00:20:03.422 5.013 - 5.040: 99.4998% ( 1) 00:20:03.422 5.040 - 5.067: 99.5050% ( 1) 00:20:03.422 5.147 - 5.173: 99.5102% ( 1) 00:20:03.422 5.173 - 5.200: 99.5154% ( 1) 00:20:03.422 5.200 - 5.227: 99.5207% ( 1) 00:20:03.422 5.227 - 5.253: 99.5311% ( 2) 00:20:03.422 5.253 - 5.280: 99.5363% ( 1) 00:20:03.422 5.333 - 5.360: 99.5467% ( 2) 00:20:03.422 5.387 - 5.413: 99.5519% ( 1) 00:20:03.422 5.467 - 5.493: 99.5571% ( 1) 00:20:03.422 5.493 - 5.520: 99.5676% ( 2) 00:20:03.422 5.520 - 5.547: 99.5780% ( 2) 00:20:03.422 5.573 - 5.600: 99.5832% ( 1) 00:20:03.422 5.627 - 5.653: 99.5884% ( 1) 00:20:03.422 5.680 - 5.707: 99.5936% ( 1) 00:20:03.422 5.787 - 5.813: 99.5988% ( 1) 00:20:03.422 5.813 - 5.840: 99.6040% ( 1) 00:20:03.422 5.867 - 5.893: 99.6092% ( 1) 00:20:03.422 6.293 - 6.320: 99.6197% ( 2) 00:20:03.422 6.320 - 6.347: 99.6249% ( 1) 00:20:03.422 6.533 - 6.560: 99.6301% ( 1) 00:20:03.422 9.493 - 9.547: 99.6353% ( 1) 00:20:03.422 12.907 - 12.960: 99.6405% ( 1) 00:20:03.422 14.933 - 15.040: 99.6457% ( 1) 00:20:03.422 42.240 - 42.453: 99.6509% ( 1) 
00:20:03.422 135.680 - 136.533: 99.6561% ( 1) 00:20:03.422 3986.773 - 4014.080: 100.0000% ( 66) 00:20:03.422 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:03.422 [ 00:20:03.422 { 00:20:03.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.422 "subtype": "Discovery", 00:20:03.422 "listen_addresses": [], 00:20:03.422 "allow_any_host": true, 00:20:03.422 "hosts": [] 00:20:03.422 }, 00:20:03.422 { 00:20:03.422 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:03.422 "subtype": "NVMe", 00:20:03.422 "listen_addresses": [ 00:20:03.422 { 00:20:03.422 "trtype": "VFIOUSER", 00:20:03.422 "adrfam": "IPv4", 00:20:03.422 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:03.422 "trsvcid": "0" 00:20:03.422 } 00:20:03.422 ], 00:20:03.422 "allow_any_host": true, 00:20:03.422 "hosts": [], 00:20:03.422 "serial_number": "SPDK1", 00:20:03.422 "model_number": "SPDK bdev Controller", 00:20:03.422 "max_namespaces": 32, 00:20:03.422 "min_cntlid": 1, 00:20:03.422 "max_cntlid": 65519, 00:20:03.422 "namespaces": [ 00:20:03.422 { 00:20:03.422 "nsid": 1, 00:20:03.422 "bdev_name": "Malloc1", 00:20:03.422 "name": "Malloc1", 00:20:03.422 "nguid": "AADE631005254CA8BC653060DAA19B41", 00:20:03.422 "uuid": "aade6310-0525-4ca8-bc65-3060daa19b41" 00:20:03.422 }, 00:20:03.422 { 00:20:03.422 "nsid": 2, 00:20:03.422 "bdev_name": "Malloc3", 00:20:03.422 "name": "Malloc3", 00:20:03.422 "nguid": "4F4FA6ED3C944A639E72F2343CA78BB8", 00:20:03.422 "uuid": "4f4fa6ed-3c94-4a63-9e72-f2343ca78bb8" 00:20:03.422 } 00:20:03.422 ] 00:20:03.422 }, 00:20:03.422 { 00:20:03.422 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:03.422 "subtype": "NVMe", 00:20:03.422 "listen_addresses": [ 00:20:03.422 { 00:20:03.422 "trtype": "VFIOUSER", 00:20:03.422 "adrfam": "IPv4", 00:20:03.422 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:03.422 "trsvcid": "0" 00:20:03.422 } 00:20:03.422 ], 00:20:03.422 "allow_any_host": true, 00:20:03.422 "hosts": [], 00:20:03.422 "serial_number": "SPDK2", 00:20:03.422 "model_number": "SPDK bdev Controller", 00:20:03.422 "max_namespaces": 32, 00:20:03.422 "min_cntlid": 1, 00:20:03.422 "max_cntlid": 65519, 00:20:03.422 "namespaces": [ 00:20:03.422 { 00:20:03.422 "nsid": 1, 00:20:03.422 "bdev_name": "Malloc2", 00:20:03.422 "name": "Malloc2", 00:20:03.422 "nguid": "824537347E7D4678A8B735D783C0C258", 00:20:03.422 "uuid": "82453734-7e7d-4678-a8b7-35d783c0c258" 00:20:03.422 } 00:20:03.422 ] 00:20:03.422 } 00:20:03.422 ] 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3014248 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:03.422 17:20:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:03.683 Malloc4 00:20:03.683 [2024-10-01 17:20:02.099500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.683 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:03.943 [2024-10-01 17:20:02.277633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:03.943 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:03.943 Asynchronous Event Request test 00:20:03.943 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.943 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.943 Registering asynchronous event callbacks... 00:20:03.943 Starting namespace attribute notice tests for all controllers... 00:20:03.943 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:03.943 aer_cb - Changed Namespace 00:20:03.943 Cleaning up... 
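The AER exercise above has three moving parts, all visible in the trace: the aer tool connects to cnode2 and blocks on a touch file, the script then hot-adds a second namespace behind its back, and the tool's aer_cb fires with the "Changed Namespace" notice before cleaning up. A sketch of the hot-add as issued through rpc.py, with the workspace prefix shortened and the NQN and bdev names taken from the trace:

    # Create a 64 MB malloc bdev with 512-byte blocks to act as the new namespace
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4

    # Attach it as NSID 2 of the second vfio-user subsystem; this is what triggers the
    # namespace-attribute-changed asynchronous event reported above as "aer_cb - Changed Namespace"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

    # List the subsystems again to confirm Malloc4 now appears under nqn.2019-07.io.spdk:cnode2
    scripts/rpc.py nvmf_get_subsystems

The second nvmf_get_subsystems dump below shows the result: cnode2 now carries Malloc2 as NSID 1 and Malloc4 as NSID 2.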
00:20:03.943 [ 00:20:03.943 { 00:20:03.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.943 "subtype": "Discovery", 00:20:03.943 "listen_addresses": [], 00:20:03.943 "allow_any_host": true, 00:20:03.943 "hosts": [] 00:20:03.943 }, 00:20:03.943 { 00:20:03.943 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:03.943 "subtype": "NVMe", 00:20:03.943 "listen_addresses": [ 00:20:03.943 { 00:20:03.943 "trtype": "VFIOUSER", 00:20:03.943 "adrfam": "IPv4", 00:20:03.943 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:03.943 "trsvcid": "0" 00:20:03.943 } 00:20:03.943 ], 00:20:03.943 "allow_any_host": true, 00:20:03.943 "hosts": [], 00:20:03.943 "serial_number": "SPDK1", 00:20:03.943 "model_number": "SPDK bdev Controller", 00:20:03.943 "max_namespaces": 32, 00:20:03.943 "min_cntlid": 1, 00:20:03.943 "max_cntlid": 65519, 00:20:03.943 "namespaces": [ 00:20:03.943 { 00:20:03.943 "nsid": 1, 00:20:03.943 "bdev_name": "Malloc1", 00:20:03.943 "name": "Malloc1", 00:20:03.943 "nguid": "AADE631005254CA8BC653060DAA19B41", 00:20:03.943 "uuid": "aade6310-0525-4ca8-bc65-3060daa19b41" 00:20:03.943 }, 00:20:03.943 { 00:20:03.943 "nsid": 2, 00:20:03.943 "bdev_name": "Malloc3", 00:20:03.943 "name": "Malloc3", 00:20:03.943 "nguid": "4F4FA6ED3C944A639E72F2343CA78BB8", 00:20:03.943 "uuid": "4f4fa6ed-3c94-4a63-9e72-f2343ca78bb8" 00:20:03.943 } 00:20:03.943 ] 00:20:03.943 }, 00:20:03.943 { 00:20:03.943 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:03.943 "subtype": "NVMe", 00:20:03.943 "listen_addresses": [ 00:20:03.943 { 00:20:03.943 "trtype": "VFIOUSER", 00:20:03.943 "adrfam": "IPv4", 00:20:03.943 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:03.943 "trsvcid": "0" 00:20:03.943 } 00:20:03.943 ], 00:20:03.943 "allow_any_host": true, 00:20:03.943 "hosts": [], 00:20:03.943 "serial_number": "SPDK2", 00:20:03.943 "model_number": "SPDK bdev Controller", 00:20:03.943 "max_namespaces": 32, 00:20:03.943 "min_cntlid": 1, 00:20:03.943 "max_cntlid": 65519, 00:20:03.943 "namespaces": [ 00:20:03.943 { 00:20:03.943 "nsid": 1, 00:20:03.943 "bdev_name": "Malloc2", 00:20:03.943 "name": "Malloc2", 00:20:03.943 "nguid": "824537347E7D4678A8B735D783C0C258", 00:20:03.943 "uuid": "82453734-7e7d-4678-a8b7-35d783c0c258" 00:20:03.943 }, 00:20:03.943 { 00:20:03.943 "nsid": 2, 00:20:03.943 "bdev_name": "Malloc4", 00:20:03.943 "name": "Malloc4", 00:20:03.943 "nguid": "6D4A30B11ED645F18E9897CBCC403E9F", 00:20:03.943 "uuid": "6d4a30b1-1ed6-45f1-8e98-97cbcc403e9f" 00:20:03.943 } 00:20:03.943 ] 00:20:03.943 } 00:20:03.943 ] 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3014248 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3005495 ']' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005495' 00:20:04.204 killing process with pid 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3005495 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3014550 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3014550' 00:20:04.204 Process pid: 3014550 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3014550 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3014550 ']' 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.204 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:04.464 [2024-10-01 17:20:02.785118] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:04.464 [2024-10-01 17:20:02.786068] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:20:04.464 [2024-10-01 17:20:02.786113] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.464 [2024-10-01 17:20:02.848908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.464 [2024-10-01 17:20:02.881054] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.464 [2024-10-01 17:20:02.881094] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.464 [2024-10-01 17:20:02.881104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.464 [2024-10-01 17:20:02.881111] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.464 [2024-10-01 17:20:02.881117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.464 [2024-10-01 17:20:02.881180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.464 [2024-10-01 17:20:02.881291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.464 [2024-10-01 17:20:02.881444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.464 [2024-10-01 17:20:02.881445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.464 [2024-10-01 17:20:02.937210] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:04.464 [2024-10-01 17:20:02.937374] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:04.464 [2024-10-01 17:20:02.938339] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:04.464 [2024-10-01 17:20:02.939094] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:04.464 [2024-10-01 17:20:02.939181] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
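With the first target killed, the script relaunches nvmf_tgt in interrupt mode (the reactor and spdk_thread intr-mode notices above) and rebuilds the same two vfio-user devices, this time passing the extra '-M -I' arguments to the VFIOUSER transport via setup_nvmf_vfio_user. A condensed sketch of that restart, with the Jenkins workspace path abbreviated to $SPDK_DIR for readability (the abbreviation is not a variable the script itself defines):

    # Relaunch the target on cores 0-3 with --interrupt-mode, as echoed next to the 'Process pid: 3014550' line
    # (backgrounded here; the test harness manages the process lifetime itself)
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

    # Recreate the vfio-user transport, forwarding the '-M -I' transport arguments used by this pass
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

    # Per-device setup, repeated for vfio-user1/1 and vfio-user2/2 (see the rpc.py calls that follow)
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The rpc.py invocations mirror the ones echoed in the trace below; only the workspace prefix has been shortened.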
00:20:04.464 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.464 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:04.464 17:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:05.850 17:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:05.850 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:05.850 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:05.850 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:05.850 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:05.850 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:05.850 Malloc1 00:20:06.112 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:06.112 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:06.372 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:06.632 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:06.632 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:06.632 17:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:06.632 Malloc2 00:20:06.893 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:06.893 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:07.154 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3014550 ']' 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3014550' 00:20:07.414 killing process with pid 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3014550 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:07.414 00:20:07.414 real 0m49.899s 00:20:07.414 user 3m12.941s 00:20:07.414 sys 0m2.753s 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.414 17:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:07.414 ************************************ 00:20:07.414 END TEST nvmf_vfio_user 00:20:07.414 ************************************ 00:20:07.676 17:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:07.676 17:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.676 17:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.676 17:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.676 ************************************ 00:20:07.676 START TEST nvmf_vfio_user_nvme_compliance 00:20:07.676 ************************************ 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:07.676 * Looking for test storage... 
00:20:07.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.676 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.938 --rc genhtml_branch_coverage=1 00:20:07.938 --rc genhtml_function_coverage=1 00:20:07.938 --rc genhtml_legend=1 00:20:07.938 --rc geninfo_all_blocks=1 00:20:07.938 --rc geninfo_unexecuted_blocks=1 00:20:07.938 00:20:07.938 ' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.938 --rc genhtml_branch_coverage=1 00:20:07.938 --rc genhtml_function_coverage=1 00:20:07.938 --rc genhtml_legend=1 00:20:07.938 --rc geninfo_all_blocks=1 00:20:07.938 --rc geninfo_unexecuted_blocks=1 00:20:07.938 00:20:07.938 ' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.938 --rc genhtml_branch_coverage=1 00:20:07.938 --rc genhtml_function_coverage=1 00:20:07.938 --rc genhtml_legend=1 00:20:07.938 --rc geninfo_all_blocks=1 00:20:07.938 --rc geninfo_unexecuted_blocks=1 00:20:07.938 00:20:07.938 ' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:07.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.938 --rc genhtml_branch_coverage=1 00:20:07.938 --rc genhtml_function_coverage=1 00:20:07.938 --rc genhtml_legend=1 00:20:07.938 --rc geninfo_all_blocks=1 00:20:07.938 --rc 
geninfo_unexecuted_blocks=1 00:20:07.938 00:20:07.938 ' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.938 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3015292 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3015292' 00:20:07.939 Process pid: 3015292 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3015292 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3015292 ']' 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.939 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:07.939 [2024-10-01 17:20:06.321175] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
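Before the compliance binary runs, the script stands up a minimal single-subsystem target: a 3-core nvmf_tgt (the -m 0x7 mask echoed near the 'Process pid: 3015292' line above), a plain VFIOUSER transport, and one malloc-backed namespace under nqn.2021-09.io.spdk:cnode0 listening on /var/run/vfio-user. A sketch of that sequence as it appears in the rpc_cmd trace that follows; rpc_cmd is the harness wrapper around scripts/rpc.py, shown here as plain rpc.py with the workspace prefix shortened:

    # Start a fresh target on cores 0-2 for the compliance suite (backgrounded here for readability)
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

    # Transport, backing bdev, subsystem, namespace and vfio-user listener
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # Point the CUnit compliance suite at the new endpoint
    $SPDK_DIR/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'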
00:20:07.939 [2024-10-01 17:20:06.321256] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.939 [2024-10-01 17:20:06.386613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.939 [2024-10-01 17:20:06.426048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.939 [2024-10-01 17:20:06.426096] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.939 [2024-10-01 17:20:06.426104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.939 [2024-10-01 17:20:06.426111] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.939 [2024-10-01 17:20:06.426117] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.939 [2024-10-01 17:20:06.426275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.939 [2024-10-01 17:20:06.426427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.939 [2024-10-01 17:20:06.426430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.200 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.200 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:20:08.200 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 malloc0 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:09.144 17:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.144 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:09.405 00:20:09.405 00:20:09.405 CUnit - A unit testing framework for C - Version 2.1-3 00:20:09.405 http://cunit.sourceforge.net/ 00:20:09.405 00:20:09.405 00:20:09.405 Suite: nvme_compliance 00:20:09.405 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 17:20:07.768709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.405 [2024-10-01 17:20:07.770046] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:09.405 [2024-10-01 17:20:07.770058] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:09.405 [2024-10-01 17:20:07.770063] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:09.405 [2024-10-01 17:20:07.771727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.406 passed 00:20:09.406 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 17:20:07.868311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.406 [2024-10-01 17:20:07.871328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.406 passed 00:20:09.666 Test: admin_identify_ns ...[2024-10-01 17:20:07.967242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.666 [2024-10-01 17:20:08.027006] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:09.666 [2024-10-01 17:20:08.035004] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:09.666 [2024-10-01 17:20:08.056122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:09.666 passed 00:20:09.666 Test: admin_get_features_mandatory_features ...[2024-10-01 17:20:08.150116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.666 [2024-10-01 17:20:08.153133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.666 passed 00:20:09.927 Test: admin_get_features_optional_features ...[2024-10-01 17:20:08.246708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.927 [2024-10-01 17:20:08.251743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.927 passed 00:20:09.927 Test: admin_set_features_number_of_queues ...[2024-10-01 17:20:08.343840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.927 [2024-10-01 17:20:08.449113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.187 passed 00:20:10.187 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 17:20:08.542752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.187 [2024-10-01 17:20:08.545773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.187 passed 00:20:10.187 Test: admin_get_log_page_with_lpo ...[2024-10-01 17:20:08.638251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.187 [2024-10-01 17:20:08.710008] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:10.187 [2024-10-01 17:20:08.723055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.448 passed 00:20:10.448 Test: fabric_property_get ...[2024-10-01 17:20:08.814725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.448 [2024-10-01 17:20:08.815970] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:10.448 [2024-10-01 17:20:08.817749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.448 passed 00:20:10.448 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 17:20:08.909313] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.448 [2024-10-01 17:20:08.910559] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:10.448 [2024-10-01 17:20:08.912336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.448 passed 00:20:10.708 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 17:20:09.007250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.708 [2024-10-01 17:20:09.091004] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.708 [2024-10-01 17:20:09.107002] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.708 [2024-10-01 17:20:09.112092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.708 passed 00:20:10.708 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 17:20:09.205752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.708 [2024-10-01 17:20:09.207007] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:10.708 [2024-10-01 17:20:09.208771] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.708 passed 00:20:10.970 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 17:20:09.301889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.970 [2024-10-01 17:20:09.375004] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:10.970 [2024-10-01 17:20:09.399000] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.970 [2024-10-01 17:20:09.404091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.970 passed 00:20:10.970 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 17:20:09.498193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.970 [2024-10-01 17:20:09.499451] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:10.970 [2024-10-01 17:20:09.499473] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:10.970 [2024-10-01 17:20:09.501220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:11.232 passed 00:20:11.232 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 17:20:09.594587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:11.232 [2024-10-01 17:20:09.686005] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:11.232 [2024-10-01 17:20:09.694003] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:11.232 [2024-10-01 17:20:09.702003] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:11.232 [2024-10-01 17:20:09.710014] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:11.232 [2024-10-01 17:20:09.739090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:11.232 passed 00:20:11.492 Test: admin_create_io_sq_verify_pc ...[2024-10-01 17:20:09.833125] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:11.492 [2024-10-01 17:20:09.852007] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:11.492 [2024-10-01 17:20:09.869294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:11.492 passed 00:20:11.492 Test: admin_create_io_qp_max_qps ...[2024-10-01 17:20:09.957813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:12.876 [2024-10-01 17:20:11.064009] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:13.137 [2024-10-01 17:20:11.451245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:13.137 passed 00:20:13.137 Test: admin_create_io_sq_shared_cq ...[2024-10-01 17:20:11.545276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:13.137 [2024-10-01 17:20:11.676012] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:13.398 [2024-10-01 17:20:11.713049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:13.398 passed 00:20:13.398 00:20:13.398 Run Summary: Type Total Ran Passed Failed Inactive 00:20:13.398 suites 1 1 n/a 0 0 00:20:13.398 tests 18 18 18 0 0 00:20:13.398 asserts 360 
360 360 0 n/a 00:20:13.398 00:20:13.398 Elapsed time = 1.652 seconds 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3015292 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3015292 ']' 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3015292 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3015292 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3015292' 00:20:13.398 killing process with pid 3015292 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3015292 00:20:13.398 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3015292 00:20:13.659 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:13.659 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:13.659 00:20:13.659 real 0m5.944s 00:20:13.659 user 0m16.662s 00:20:13.659 sys 0m0.496s 00:20:13.659 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:13.659 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:13.659 ************************************ 00:20:13.659 END TEST nvmf_vfio_user_nvme_compliance 00:20:13.659 ************************************ 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.659 ************************************ 00:20:13.659 START TEST nvmf_vfio_user_fuzz 00:20:13.659 ************************************ 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:13.659 * Looking for test storage... 
00:20:13.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:20:13.659 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.927 --rc genhtml_branch_coverage=1 00:20:13.927 --rc genhtml_function_coverage=1 00:20:13.927 --rc genhtml_legend=1 00:20:13.927 --rc geninfo_all_blocks=1 00:20:13.927 --rc geninfo_unexecuted_blocks=1 00:20:13.927 00:20:13.927 ' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.927 --rc genhtml_branch_coverage=1 00:20:13.927 --rc genhtml_function_coverage=1 00:20:13.927 --rc genhtml_legend=1 00:20:13.927 --rc geninfo_all_blocks=1 00:20:13.927 --rc geninfo_unexecuted_blocks=1 00:20:13.927 00:20:13.927 ' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.927 --rc genhtml_branch_coverage=1 00:20:13.927 --rc genhtml_function_coverage=1 00:20:13.927 --rc genhtml_legend=1 00:20:13.927 --rc geninfo_all_blocks=1 00:20:13.927 --rc geninfo_unexecuted_blocks=1 00:20:13.927 00:20:13.927 ' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.927 --rc genhtml_branch_coverage=1 00:20:13.927 --rc genhtml_function_coverage=1 00:20:13.927 --rc genhtml_legend=1 00:20:13.927 --rc geninfo_all_blocks=1 00:20:13.927 --rc geninfo_unexecuted_blocks=1 00:20:13.927 00:20:13.927 ' 00:20:13.927 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:13.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3016415 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3016415' 00:20:13.928 Process pid: 3016415 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3016415 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3016415 ']' 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
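The "[: : integer expression expected" message a few lines above is harmless noise from nvmf/common.sh line 33: an empty variable is fed into a numeric test ('[' '' -eq 1 ']'). A minimal sketch of the defensive pattern that avoids the message (the variable name is a placeholder, not the one common.sh actually tests):
SOME_TEST_FLAG=""                          # empty in this CI environment
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then  # default to 0 so the operand is always numeric
    echo "flag enabled"
fi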
00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.928 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.189 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.189 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:14.189 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 malloc0 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
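The fuzz-target bring-up traced above reduces to a short RPC sequence. A minimal standalone sketch, assuming an SPDK build tree at $SPDK_DIR (a placeholder) and the default RPC socket /var/tmp/spdk.sock that the log's rpc_cmd wrapper talks to; the fuzz invocation matches the one that follows below:
SPDK_DIR=/path/to/spdk                                # assumption: the checkout used by this job
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # single-core target, all trace flags
sleep 1                                               # the script waits with waitforlisten instead
mkdir -p /var/run/vfio-user
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a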
00:20:15.133 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:47.240 Fuzzing completed. Shutting down the fuzz application 00:20:47.240 00:20:47.240 Dumping successful admin opcodes: 00:20:47.240 8, 9, 10, 24, 00:20:47.240 Dumping successful io opcodes: 00:20:47.240 0, 00:20:47.240 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1093989, total successful commands: 4309, random_seed: 3071285632 00:20:47.240 NS: 0x200003a1ef00 admin qp, Total commands completed: 137600, total successful commands: 1116, random_seed: 913654784 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3016415 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3016415 ']' 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3016415 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3016415 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3016415' 00:20:47.240 killing process with pid 3016415 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3016415 00:20:47.240 17:20:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3016415 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:47.240 00:20:47.240 real 0m33.166s 00:20:47.240 user 0m37.152s 00:20:47.240 sys 0m25.858s 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:47.240 
************************************ 00:20:47.240 END TEST nvmf_vfio_user_fuzz 00:20:47.240 ************************************ 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:47.240 ************************************ 00:20:47.240 START TEST nvmf_auth_target 00:20:47.240 ************************************ 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:47.240 * Looking for test storage... 00:20:47.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.240 --rc genhtml_branch_coverage=1 00:20:47.240 --rc genhtml_function_coverage=1 00:20:47.240 --rc genhtml_legend=1 00:20:47.240 --rc geninfo_all_blocks=1 00:20:47.240 --rc geninfo_unexecuted_blocks=1 00:20:47.240 00:20:47.240 ' 00:20:47.240 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:47.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.240 --rc genhtml_branch_coverage=1 00:20:47.240 --rc genhtml_function_coverage=1 00:20:47.240 --rc genhtml_legend=1 00:20:47.241 --rc geninfo_all_blocks=1 00:20:47.241 --rc geninfo_unexecuted_blocks=1 00:20:47.241 00:20:47.241 ' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.241 --rc genhtml_branch_coverage=1 00:20:47.241 --rc genhtml_function_coverage=1 00:20:47.241 --rc genhtml_legend=1 00:20:47.241 --rc geninfo_all_blocks=1 00:20:47.241 --rc geninfo_unexecuted_blocks=1 00:20:47.241 00:20:47.241 ' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:47.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.241 --rc genhtml_branch_coverage=1 00:20:47.241 --rc genhtml_function_coverage=1 00:20:47.241 --rc genhtml_legend=1 00:20:47.241 --rc geninfo_all_blocks=1 00:20:47.241 --rc geninfo_unexecuted_blocks=1 00:20:47.241 00:20:47.241 ' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.241 17:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.241 17:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:55.379 
17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:55.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.379 17:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:55.379 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:55.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:55.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.379 17:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:20:55.379 00:20:55.379 --- 10.0.0.2 ping statistics --- 00:20:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.379 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:20:55.379 00:20:55.379 --- 10.0.0.1 ping statistics --- 00:20:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.379 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:55.379 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3026474 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3026474 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3026474 ']' 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
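The interface plumbing for the TCP auth tests, consolidated from the trace above: one of the two E810 ports is moved into a private network namespace to act as the target side, the other stays in the root namespace as the initiator, and the two ping checks confirm connectivity between 10.0.0.1 and 10.0.0.2. A minimal sketch with this host's device names (cvl_0_0/cvl_0_1 under PCI 0000:4b:00.0/1; substitute local NICs elsewhere):
ls /sys/bus/pci/devices/0000:4b:00.0/net              # how a PCI function is mapped to its netdev
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced target -> root ns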
00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.380 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3026734 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=fb84b64337fba7d0b1d1aaaf175b308e094172c7ce3cd51f 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.AiM 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key fb84b64337fba7d0b1d1aaaf175b308e094172c7ce3cd51f 0 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 fb84b64337fba7d0b1d1aaaf175b308e094172c7ce3cd51f 0 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=fb84b64337fba7d0b1d1aaaf175b308e094172c7ce3cd51f 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:20:55.380 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.AiM 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.AiM 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.AiM 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f9bbaa7637cee9450f8c48acca51717e04586cd5426a04ac102d5983c076f219 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.9eN 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f9bbaa7637cee9450f8c48acca51717e04586cd5426a04ac102d5983c076f219 3 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f9bbaa7637cee9450f8c48acca51717e04586cd5426a04ac102d5983c076f219 3 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f9bbaa7637cee9450f8c48acca51717e04586cd5426a04ac102d5983c076f219 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:55.641 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.9eN 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.9eN 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.9eN 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
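A quick way to sanity-check a secret file like the /tmp/spdk.key-null.AiM produced above is to strip the DHHC-1 framing and confirm the payload decodes back to the hex key plus its four-byte checksum. This is a throwaway check, not part of the harness, and it assumes GNU head for the negative -c count:

keyfile=/tmp/spdk.key-null.AiM            # path taken from the trace above
secret=$(cat "$keyfile")
b64=${secret#DHHC-1:*:}; b64=${b64%:}     # drop the "DHHC-1:xx:" prefix and the trailing ":"
echo -n "$b64" | base64 -d | head -c -4   # prints the original 48-char hex key (CRC trimmed)
echo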
00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4a27d203128b2e4c3ede7d57a0992801 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.c5C 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4a27d203128b2e4c3ede7d57a0992801 1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4a27d203128b2e4c3ede7d57a0992801 1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4a27d203128b2e4c3ede7d57a0992801 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.c5C 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.c5C 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.c5C 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=645f096ab6cb04a201c23b64161f9db11f313bf1fff52665 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.BKb 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 645f096ab6cb04a201c23b64161f9db11f313bf1fff52665 2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 645f096ab6cb04a201c23b64161f9db11f313bf1fff52665 2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.641 17:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=645f096ab6cb04a201c23b64161f9db11f313bf1fff52665 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.BKb 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.BKb 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BKb 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a90be0c0d7bad02ea0555090428c36337266bcda4e2b7740 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.wAD 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a90be0c0d7bad02ea0555090428c36337266bcda4e2b7740 2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a90be0c0d7bad02ea0555090428c36337266bcda4e2b7740 2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a90be0c0d7bad02ea0555090428c36337266bcda4e2b7740 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:55.641 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.wAD 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.wAD 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.wAD 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
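Taken together, the key-generation entries here and just below fill four key slots with complementary host/controller secrets; slot 3 deliberately gets no controller key, so that path exercises unidirectional authentication only. Regenerating the same layout with the sketch defined earlier would look roughly like this (slot combinations read off this trace; in the harness the arrays hold file paths rather than the secrets themselves):

# slot:  host key (digest/len)   controller key (digest/len, empty = unidirectional)
# 0:     null/48                 sha512/64
# 1:     sha256/32               sha384/48
# 2:     sha384/48               sha256/32
# 3:     sha512/64               (none)
declare -a keys ckeys
keys[0]=$(gen_dhchap_key_sketch 0 48); ckeys[0]=$(gen_dhchap_key_sketch 3 64)
keys[1]=$(gen_dhchap_key_sketch 1 32); ckeys[1]=$(gen_dhchap_key_sketch 2 48)
keys[2]=$(gen_dhchap_key_sketch 2 48); ckeys[2]=$(gen_dhchap_key_sketch 1 32)
keys[3]=$(gen_dhchap_key_sketch 3 64); ckeys[3]=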
00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6e2c5ed9ee1d509044e47e11d76c64de 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.lWL 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6e2c5ed9ee1d509044e47e11d76c64de 1 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6e2c5ed9ee1d509044e47e11d76c64de 1 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6e2c5ed9ee1d509044e47e11d76c64de 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.lWL 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.lWL 00:20:55.902 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.lWL 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b031f7b7dee1047b67ba7a3c9c20339462fe4f1c475898352270cc78fd2ba355 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.1Uu 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key b031f7b7dee1047b67ba7a3c9c20339462fe4f1c475898352270cc78fd2ba355 3 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b031f7b7dee1047b67ba7a3c9c20339462fe4f1c475898352270cc78fd2ba355 3 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b031f7b7dee1047b67ba7a3c9c20339462fe4f1c475898352270cc78fd2ba355 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.1Uu 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.1Uu 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1Uu 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3026474 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3026474 ']' 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.903 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3026734 /var/tmp/host.sock 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3026734 ']' 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
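From here the test registers each key file with both RPC servers and then, for every digest/dhgroup/key combination, authenticates once through the host-side bdev layer and once through the kernel initiator. Condensed from the entries that follow, for the key0/ckey0 slot (socket paths, NQNs and addresses are the ones used in this run; the cnode0 subsystem and its 10.0.0.2:4420 listener are assumed to have been created earlier in the test):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Register the secret files in both keyrings: target (default /var/tmp/spdk.sock) and host.
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.AiM
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN
$RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AiM
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN

# Pin the host-side initiator to one digest/dhgroup combination, then authenticate.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The target reports the qpair's auth state; the test expects "completed".
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Same check through the kernel initiator, passing the DHHC-1 secrets directly.
sudo nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid ${HOSTNQN#*uuid:} -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.AiM)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.9eN)"
sudo nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN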
00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AiM 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.AiM 00:20:56.164 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AiM 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.9eN ]] 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN 00:20:56.423 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c5C 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.683 17:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.c5C 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.c5C 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BKb ]] 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BKb 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BKb 00:20:56.683 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BKb 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wAD 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wAD 00:20:56.943 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wAD 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.lWL ]] 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lWL 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lWL 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lWL 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:57.203 17:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu 00:20:57.203 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.462 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.722 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.722 
17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.983 00:20:57.983 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.983 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.983 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.983 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.244 { 00:20:58.244 "cntlid": 1, 00:20:58.244 "qid": 0, 00:20:58.244 "state": "enabled", 00:20:58.244 "thread": "nvmf_tgt_poll_group_000", 00:20:58.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:58.244 "listen_address": { 00:20:58.244 "trtype": "TCP", 00:20:58.244 "adrfam": "IPv4", 00:20:58.244 "traddr": "10.0.0.2", 00:20:58.244 "trsvcid": "4420" 00:20:58.244 }, 00:20:58.244 "peer_address": { 00:20:58.244 "trtype": "TCP", 00:20:58.244 "adrfam": "IPv4", 00:20:58.244 "traddr": "10.0.0.1", 00:20:58.244 "trsvcid": "50582" 00:20:58.244 }, 00:20:58.244 "auth": { 00:20:58.244 "state": "completed", 00:20:58.244 "digest": "sha256", 00:20:58.244 "dhgroup": "null" 00:20:58.244 } 00:20:58.244 } 00:20:58.244 ]' 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.244 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.504 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:20:58.504 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:59.444 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.445 17:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.445 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.705 00:20:59.705 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.705 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.705 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.965 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.965 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.965 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.965 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.966 { 00:20:59.966 "cntlid": 3, 00:20:59.966 "qid": 0, 00:20:59.966 "state": "enabled", 00:20:59.966 "thread": "nvmf_tgt_poll_group_000", 00:20:59.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.966 "listen_address": { 00:20:59.966 "trtype": "TCP", 00:20:59.966 "adrfam": "IPv4", 00:20:59.966 "traddr": "10.0.0.2", 00:20:59.966 "trsvcid": "4420" 00:20:59.966 }, 00:20:59.966 "peer_address": { 00:20:59.966 "trtype": "TCP", 00:20:59.966 "adrfam": "IPv4", 00:20:59.966 "traddr": "10.0.0.1", 00:20:59.966 "trsvcid": "50602" 00:20:59.966 }, 00:20:59.966 "auth": { 00:20:59.966 "state": "completed", 00:20:59.966 "digest": "sha256", 00:20:59.966 "dhgroup": "null" 00:20:59.966 } 00:20:59.966 } 00:20:59.966 ]' 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.966 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.252 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:00.252 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:00.849 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.849 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.849 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.849 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.110 17:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.110 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.370 00:21:01.370 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.370 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.370 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.632 { 00:21:01.632 "cntlid": 5, 00:21:01.632 "qid": 0, 00:21:01.632 "state": "enabled", 00:21:01.632 "thread": "nvmf_tgt_poll_group_000", 00:21:01.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.632 "listen_address": { 00:21:01.632 "trtype": "TCP", 00:21:01.632 "adrfam": "IPv4", 00:21:01.632 "traddr": "10.0.0.2", 00:21:01.632 "trsvcid": "4420" 00:21:01.632 }, 00:21:01.632 "peer_address": { 00:21:01.632 "trtype": "TCP", 00:21:01.632 "adrfam": "IPv4", 00:21:01.632 "traddr": "10.0.0.1", 00:21:01.632 "trsvcid": "50628" 00:21:01.632 }, 00:21:01.632 "auth": { 00:21:01.632 "state": "completed", 00:21:01.632 "digest": "sha256", 00:21:01.632 "dhgroup": "null" 00:21:01.632 } 00:21:01.632 } 00:21:01.632 ]' 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.632 17:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.632 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.892 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:01.892 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.833 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.834 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.095 00:21:03.095 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.095 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.095 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.357 { 00:21:03.357 "cntlid": 7, 00:21:03.357 "qid": 0, 00:21:03.357 "state": "enabled", 00:21:03.357 "thread": "nvmf_tgt_poll_group_000", 00:21:03.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:03.357 "listen_address": { 00:21:03.357 "trtype": "TCP", 00:21:03.357 "adrfam": "IPv4", 00:21:03.357 "traddr": "10.0.0.2", 00:21:03.357 "trsvcid": "4420" 00:21:03.357 }, 00:21:03.357 "peer_address": { 00:21:03.357 "trtype": "TCP", 00:21:03.357 "adrfam": "IPv4", 00:21:03.357 "traddr": "10.0.0.1", 00:21:03.357 "trsvcid": "57730" 00:21:03.357 }, 00:21:03.357 "auth": { 00:21:03.357 "state": "completed", 00:21:03.357 "digest": "sha256", 00:21:03.357 "dhgroup": "null" 00:21:03.357 } 00:21:03.357 } 00:21:03.357 ]' 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.357 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.617 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:03.617 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.559 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.560 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.819 00:21:04.819 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.819 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.819 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.081 { 00:21:05.081 "cntlid": 9, 00:21:05.081 "qid": 0, 00:21:05.081 "state": "enabled", 00:21:05.081 "thread": "nvmf_tgt_poll_group_000", 00:21:05.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:05.081 "listen_address": { 00:21:05.081 "trtype": "TCP", 00:21:05.081 "adrfam": "IPv4", 00:21:05.081 "traddr": "10.0.0.2", 00:21:05.081 "trsvcid": "4420" 00:21:05.081 }, 00:21:05.081 "peer_address": { 00:21:05.081 "trtype": "TCP", 00:21:05.081 "adrfam": "IPv4", 00:21:05.081 "traddr": "10.0.0.1", 00:21:05.081 "trsvcid": "57756" 00:21:05.081 }, 00:21:05.081 "auth": { 00:21:05.081 "state": "completed", 00:21:05.081 "digest": "sha256", 00:21:05.081 "dhgroup": "ffdhe2048" 00:21:05.081 } 00:21:05.081 } 00:21:05.081 ]' 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.081 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.341 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:05.341 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:05.916 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.177 17:21:04 
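Each pass in this stretch of the trace starts by pinning the DH-HMAC-CHAP negotiation parameters on the host-side SPDK application before any controller is attached. A minimal standalone sketch of that step, using the same RPC socket and values shown in the pass above:

  # Restrict the host to the SHA-256 digest and the ffdhe2048 DH group
  # (the host-side app listens on /var/tmp/host.sock in this run).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048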
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.177 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.438 00:21:06.438 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.438 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.438 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.699 { 00:21:06.699 "cntlid": 11, 00:21:06.699 "qid": 0, 00:21:06.699 "state": "enabled", 00:21:06.699 "thread": "nvmf_tgt_poll_group_000", 00:21:06.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:06.699 "listen_address": { 00:21:06.699 "trtype": "TCP", 00:21:06.699 "adrfam": "IPv4", 00:21:06.699 "traddr": "10.0.0.2", 00:21:06.699 "trsvcid": "4420" 00:21:06.699 }, 00:21:06.699 "peer_address": { 00:21:06.699 "trtype": "TCP", 00:21:06.699 "adrfam": "IPv4", 00:21:06.699 "traddr": "10.0.0.1", 00:21:06.699 "trsvcid": "57778" 00:21:06.699 }, 00:21:06.699 "auth": { 00:21:06.699 "state": "completed", 00:21:06.699 "digest": "sha256", 00:21:06.699 "dhgroup": "ffdhe2048" 00:21:06.699 } 00:21:06.699 } 00:21:06.699 ]' 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.699 17:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.699 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.960 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:06.960 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.901 17:21:06 
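The pass continuing below (key2) then registers the host on the subsystem with that key pair and attaches a controller from the host-side app. A sketch of the two RPCs, assuming rpc.py without -s reaches the target-side socket (the trace's rpc_cmd wrapper hides which socket it uses) and that key2/ckey2 name key objects registered earlier in the run:

  # Target side: allow the host NQN and bind a host key plus a controller
  # (bidirectional) key to it.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach with the matching keys; DH-HMAC-CHAP runs during the
  # connect, and the controller shows up as nvme0 only if it succeeds.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2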
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.901 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.161 00:21:08.161 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.161 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.161 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.423 { 00:21:08.423 "cntlid": 13, 00:21:08.423 "qid": 0, 00:21:08.423 "state": "enabled", 00:21:08.423 "thread": "nvmf_tgt_poll_group_000", 00:21:08.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:08.423 "listen_address": { 00:21:08.423 "trtype": "TCP", 00:21:08.423 "adrfam": "IPv4", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "trsvcid": "4420" 00:21:08.423 }, 00:21:08.423 "peer_address": { 00:21:08.423 "trtype": "TCP", 00:21:08.423 "adrfam": "IPv4", 00:21:08.423 "traddr": "10.0.0.1", 00:21:08.423 "trsvcid": "57816" 00:21:08.423 }, 00:21:08.423 "auth": { 00:21:08.423 "state": "completed", 00:21:08.423 "digest": 
"sha256", 00:21:08.423 "dhgroup": "ffdhe2048" 00:21:08.423 } 00:21:08.423 } 00:21:08.423 ]' 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.423 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.684 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.684 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.684 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.684 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:08.684 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.625 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.625 17:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.625 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.887 00:21:09.887 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.887 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.887 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.148 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.149 { 00:21:10.149 "cntlid": 15, 00:21:10.149 "qid": 0, 00:21:10.149 "state": "enabled", 00:21:10.149 "thread": "nvmf_tgt_poll_group_000", 00:21:10.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:10.149 "listen_address": { 00:21:10.149 "trtype": "TCP", 00:21:10.149 "adrfam": "IPv4", 00:21:10.149 "traddr": "10.0.0.2", 00:21:10.149 "trsvcid": "4420" 00:21:10.149 }, 00:21:10.149 "peer_address": { 00:21:10.149 "trtype": "TCP", 00:21:10.149 "adrfam": "IPv4", 00:21:10.149 "traddr": "10.0.0.1", 00:21:10.149 
"trsvcid": "57856" 00:21:10.149 }, 00:21:10.149 "auth": { 00:21:10.149 "state": "completed", 00:21:10.149 "digest": "sha256", 00:21:10.149 "dhgroup": "ffdhe2048" 00:21:10.149 } 00:21:10.149 } 00:21:10.149 ]' 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.149 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.409 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:10.409 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:11.349 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:11.350 17:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.350 17:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.610 00:21:11.610 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.610 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.610 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.870 { 00:21:11.870 "cntlid": 17, 00:21:11.870 "qid": 0, 00:21:11.870 "state": "enabled", 00:21:11.870 "thread": "nvmf_tgt_poll_group_000", 00:21:11.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.870 "listen_address": { 00:21:11.870 "trtype": "TCP", 00:21:11.870 "adrfam": "IPv4", 
00:21:11.870 "traddr": "10.0.0.2", 00:21:11.870 "trsvcid": "4420" 00:21:11.870 }, 00:21:11.870 "peer_address": { 00:21:11.870 "trtype": "TCP", 00:21:11.870 "adrfam": "IPv4", 00:21:11.870 "traddr": "10.0.0.1", 00:21:11.870 "trsvcid": "57876" 00:21:11.870 }, 00:21:11.870 "auth": { 00:21:11.870 "state": "completed", 00:21:11.870 "digest": "sha256", 00:21:11.870 "dhgroup": "ffdhe3072" 00:21:11.870 } 00:21:11.870 } 00:21:11.870 ]' 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.870 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.871 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.871 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.132 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:12.132 17:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.076 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.337 00:21:13.337 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.337 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.337 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.597 { 
00:21:13.597 "cntlid": 19, 00:21:13.597 "qid": 0, 00:21:13.597 "state": "enabled", 00:21:13.597 "thread": "nvmf_tgt_poll_group_000", 00:21:13.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:13.597 "listen_address": { 00:21:13.597 "trtype": "TCP", 00:21:13.597 "adrfam": "IPv4", 00:21:13.597 "traddr": "10.0.0.2", 00:21:13.597 "trsvcid": "4420" 00:21:13.597 }, 00:21:13.597 "peer_address": { 00:21:13.597 "trtype": "TCP", 00:21:13.597 "adrfam": "IPv4", 00:21:13.597 "traddr": "10.0.0.1", 00:21:13.597 "trsvcid": "56092" 00:21:13.597 }, 00:21:13.597 "auth": { 00:21:13.597 "state": "completed", 00:21:13.597 "digest": "sha256", 00:21:13.597 "dhgroup": "ffdhe3072" 00:21:13.597 } 00:21:13.597 } 00:21:13.597 ]' 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.597 17:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.597 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.597 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.597 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.597 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.597 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.858 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:13.858 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:14.430 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.430 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.430 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.430 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.691 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.691 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.691 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.691 17:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.691 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.951 00:21:14.951 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.951 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.951 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.212 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.212 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.212 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.213 17:21:13 
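Each pass ends with the same teardown so the next dhgroup/key combination starts from a clean state; condensed from the trace:

  # Drop the RPC-path controller after its checks, disconnect the kernel
  # initiator, and de-register the host from the subsystem.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be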
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.213 { 00:21:15.213 "cntlid": 21, 00:21:15.213 "qid": 0, 00:21:15.213 "state": "enabled", 00:21:15.213 "thread": "nvmf_tgt_poll_group_000", 00:21:15.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:15.213 "listen_address": { 00:21:15.213 "trtype": "TCP", 00:21:15.213 "adrfam": "IPv4", 00:21:15.213 "traddr": "10.0.0.2", 00:21:15.213 "trsvcid": "4420" 00:21:15.213 }, 00:21:15.213 "peer_address": { 00:21:15.213 "trtype": "TCP", 00:21:15.213 "adrfam": "IPv4", 00:21:15.213 "traddr": "10.0.0.1", 00:21:15.213 "trsvcid": "56108" 00:21:15.213 }, 00:21:15.213 "auth": { 00:21:15.213 "state": "completed", 00:21:15.213 "digest": "sha256", 00:21:15.213 "dhgroup": "ffdhe3072" 00:21:15.213 } 00:21:15.213 } 00:21:15.213 ]' 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.213 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.474 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:15.474 17:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.415 17:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.676 00:21:16.676 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.676 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.676 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.937 17:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.937 { 00:21:16.937 "cntlid": 23, 00:21:16.937 "qid": 0, 00:21:16.937 "state": "enabled", 00:21:16.937 "thread": "nvmf_tgt_poll_group_000", 00:21:16.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.937 "listen_address": { 00:21:16.937 "trtype": "TCP", 00:21:16.937 "adrfam": "IPv4", 00:21:16.937 "traddr": "10.0.0.2", 00:21:16.937 "trsvcid": "4420" 00:21:16.937 }, 00:21:16.937 "peer_address": { 00:21:16.937 "trtype": "TCP", 00:21:16.937 "adrfam": "IPv4", 00:21:16.937 "traddr": "10.0.0.1", 00:21:16.937 "trsvcid": "56136" 00:21:16.937 }, 00:21:16.937 "auth": { 00:21:16.937 "state": "completed", 00:21:16.937 "digest": "sha256", 00:21:16.937 "dhgroup": "ffdhe3072" 00:21:16.937 } 00:21:16.937 } 00:21:16.937 ]' 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.937 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.198 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.198 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.198 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.198 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:17.198 17:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.140 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.401 00:21:18.401 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.401 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.401 17:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.661 { 00:21:18.661 "cntlid": 25, 00:21:18.661 "qid": 0, 00:21:18.661 "state": "enabled", 00:21:18.661 "thread": "nvmf_tgt_poll_group_000", 00:21:18.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:18.661 "listen_address": { 00:21:18.661 "trtype": "TCP", 00:21:18.661 "adrfam": "IPv4", 00:21:18.661 "traddr": "10.0.0.2", 00:21:18.661 "trsvcid": "4420" 00:21:18.661 }, 00:21:18.661 "peer_address": { 00:21:18.661 "trtype": "TCP", 00:21:18.661 "adrfam": "IPv4", 00:21:18.661 "traddr": "10.0.0.1", 00:21:18.661 "trsvcid": "56168" 00:21:18.661 }, 00:21:18.661 "auth": { 00:21:18.661 "state": "completed", 00:21:18.661 "digest": "sha256", 00:21:18.661 "dhgroup": "ffdhe4096" 00:21:18.661 } 00:21:18.661 } 00:21:18.661 ]' 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.661 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.662 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.662 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.922 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:18.922 17:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.862 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.863 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.122 00:21:20.122 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.122 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.122 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.382 { 00:21:20.382 "cntlid": 27, 00:21:20.382 "qid": 0, 00:21:20.382 "state": "enabled", 00:21:20.382 "thread": "nvmf_tgt_poll_group_000", 00:21:20.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:20.382 "listen_address": { 00:21:20.382 "trtype": "TCP", 00:21:20.382 "adrfam": "IPv4", 00:21:20.382 "traddr": "10.0.0.2", 00:21:20.382 "trsvcid": "4420" 00:21:20.382 }, 00:21:20.382 "peer_address": { 00:21:20.382 "trtype": "TCP", 00:21:20.382 "adrfam": "IPv4", 00:21:20.382 "traddr": "10.0.0.1", 00:21:20.382 "trsvcid": "56214" 00:21:20.382 }, 00:21:20.382 "auth": { 00:21:20.382 "state": "completed", 00:21:20.382 "digest": "sha256", 00:21:20.382 "dhgroup": "ffdhe4096" 00:21:20.382 } 00:21:20.382 } 00:21:20.382 ]' 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.382 17:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.646 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:20.646 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:21.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:21.589 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.589 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.850 00:21:21.850 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:21:21.850 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.850 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.111 { 00:21:22.111 "cntlid": 29, 00:21:22.111 "qid": 0, 00:21:22.111 "state": "enabled", 00:21:22.111 "thread": "nvmf_tgt_poll_group_000", 00:21:22.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.111 "listen_address": { 00:21:22.111 "trtype": "TCP", 00:21:22.111 "adrfam": "IPv4", 00:21:22.111 "traddr": "10.0.0.2", 00:21:22.111 "trsvcid": "4420" 00:21:22.111 }, 00:21:22.111 "peer_address": { 00:21:22.111 "trtype": "TCP", 00:21:22.111 "adrfam": "IPv4", 00:21:22.111 "traddr": "10.0.0.1", 00:21:22.111 "trsvcid": "56240" 00:21:22.111 }, 00:21:22.111 "auth": { 00:21:22.111 "state": "completed", 00:21:22.111 "digest": "sha256", 00:21:22.111 "dhgroup": "ffdhe4096" 00:21:22.111 } 00:21:22.111 } 00:21:22.111 ]' 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.111 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.373 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.373 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.373 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.373 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:22.373 17:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: 
--dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.316 17:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.577 00:21:23.577 17:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.577 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.577 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.838 { 00:21:23.838 "cntlid": 31, 00:21:23.838 "qid": 0, 00:21:23.838 "state": "enabled", 00:21:23.838 "thread": "nvmf_tgt_poll_group_000", 00:21:23.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:23.838 "listen_address": { 00:21:23.838 "trtype": "TCP", 00:21:23.838 "adrfam": "IPv4", 00:21:23.838 "traddr": "10.0.0.2", 00:21:23.838 "trsvcid": "4420" 00:21:23.838 }, 00:21:23.838 "peer_address": { 00:21:23.838 "trtype": "TCP", 00:21:23.838 "adrfam": "IPv4", 00:21:23.838 "traddr": "10.0.0.1", 00:21:23.838 "trsvcid": "43856" 00:21:23.838 }, 00:21:23.838 "auth": { 00:21:23.838 "state": "completed", 00:21:23.838 "digest": "sha256", 00:21:23.838 "dhgroup": "ffdhe4096" 00:21:23.838 } 00:21:23.838 } 00:21:23.838 ]' 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.838 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.099 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:24.099 17:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.042 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.615 00:21:25.615 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.615 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.615 17:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.615 { 00:21:25.615 "cntlid": 33, 00:21:25.615 "qid": 0, 00:21:25.615 "state": "enabled", 00:21:25.615 "thread": "nvmf_tgt_poll_group_000", 00:21:25.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:25.615 "listen_address": { 00:21:25.615 "trtype": "TCP", 00:21:25.615 "adrfam": "IPv4", 00:21:25.615 "traddr": "10.0.0.2", 00:21:25.615 "trsvcid": "4420" 00:21:25.615 }, 00:21:25.615 "peer_address": { 00:21:25.615 "trtype": "TCP", 00:21:25.615 "adrfam": "IPv4", 00:21:25.615 "traddr": "10.0.0.1", 00:21:25.615 "trsvcid": "43878" 00:21:25.615 }, 00:21:25.615 "auth": { 00:21:25.615 "state": "completed", 00:21:25.615 "digest": "sha256", 00:21:25.615 "dhgroup": "ffdhe6144" 00:21:25.615 } 00:21:25.615 } 00:21:25.615 ]' 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.615 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:25.876 17:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.820 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.082 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.082 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.082 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.082 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.343 00:21:27.343 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.343 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.343 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.605 { 00:21:27.605 "cntlid": 35, 00:21:27.605 "qid": 0, 00:21:27.605 "state": "enabled", 00:21:27.605 "thread": "nvmf_tgt_poll_group_000", 00:21:27.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:27.605 "listen_address": { 00:21:27.605 "trtype": "TCP", 00:21:27.605 "adrfam": "IPv4", 00:21:27.605 "traddr": "10.0.0.2", 00:21:27.605 "trsvcid": "4420" 00:21:27.605 }, 00:21:27.605 "peer_address": { 00:21:27.605 "trtype": "TCP", 00:21:27.605 "adrfam": "IPv4", 00:21:27.605 "traddr": "10.0.0.1", 00:21:27.605 "trsvcid": "43894" 00:21:27.605 }, 00:21:27.605 "auth": { 00:21:27.605 "state": "completed", 00:21:27.605 "digest": "sha256", 00:21:27.605 "dhgroup": "ffdhe6144" 00:21:27.605 } 00:21:27.605 } 00:21:27.605 ]' 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.605 17:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.605 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.605 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.605 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.605 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.605 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.866 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:27.866 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:28.808 17:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.808 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.809 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.070 00:21:29.070 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.070 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.070 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.331 { 00:21:29.331 "cntlid": 37, 00:21:29.331 "qid": 0, 00:21:29.331 "state": "enabled", 00:21:29.331 "thread": "nvmf_tgt_poll_group_000", 00:21:29.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:29.331 "listen_address": { 00:21:29.331 "trtype": "TCP", 00:21:29.331 "adrfam": "IPv4", 00:21:29.331 "traddr": "10.0.0.2", 00:21:29.331 "trsvcid": "4420" 00:21:29.331 }, 00:21:29.331 "peer_address": { 00:21:29.331 "trtype": "TCP", 00:21:29.331 "adrfam": "IPv4", 00:21:29.331 "traddr": "10.0.0.1", 00:21:29.331 "trsvcid": "43922" 00:21:29.331 }, 00:21:29.331 "auth": { 00:21:29.331 "state": "completed", 00:21:29.331 "digest": "sha256", 00:21:29.331 "dhgroup": "ffdhe6144" 00:21:29.331 } 00:21:29.331 } 00:21:29.331 ]' 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.331 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.591 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.591 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:29.591 17:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.592 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:29.592 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.535 17:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.535 17:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.795 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.057 { 00:21:31.057 "cntlid": 39, 00:21:31.057 "qid": 0, 00:21:31.057 "state": "enabled", 00:21:31.057 "thread": "nvmf_tgt_poll_group_000", 00:21:31.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.057 "listen_address": { 00:21:31.057 "trtype": "TCP", 00:21:31.057 "adrfam": "IPv4", 00:21:31.057 "traddr": "10.0.0.2", 00:21:31.057 "trsvcid": "4420" 00:21:31.057 }, 00:21:31.057 "peer_address": { 00:21:31.057 "trtype": "TCP", 00:21:31.057 "adrfam": "IPv4", 00:21:31.057 "traddr": "10.0.0.1", 00:21:31.057 "trsvcid": "43964" 00:21:31.057 }, 00:21:31.057 "auth": { 00:21:31.057 "state": "completed", 00:21:31.057 "digest": "sha256", 00:21:31.057 "dhgroup": "ffdhe6144" 00:21:31.057 } 00:21:31.057 } 00:21:31.057 ]' 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.057 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.318 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.318 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.318 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:31.318 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.319 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.319 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:31.319 17:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:32.260 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.521 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.521 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.521 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.521 17:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.091 00:21:33.091 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.091 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.092 { 00:21:33.092 "cntlid": 41, 00:21:33.092 "qid": 0, 00:21:33.092 "state": "enabled", 00:21:33.092 "thread": "nvmf_tgt_poll_group_000", 00:21:33.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.092 "listen_address": { 00:21:33.092 "trtype": "TCP", 00:21:33.092 "adrfam": "IPv4", 00:21:33.092 "traddr": "10.0.0.2", 00:21:33.092 "trsvcid": "4420" 00:21:33.092 }, 00:21:33.092 "peer_address": { 00:21:33.092 "trtype": "TCP", 00:21:33.092 "adrfam": "IPv4", 00:21:33.092 "traddr": "10.0.0.1", 00:21:33.092 "trsvcid": "58118" 00:21:33.092 }, 00:21:33.092 "auth": { 00:21:33.092 "state": "completed", 00:21:33.092 "digest": "sha256", 00:21:33.092 "dhgroup": "ffdhe8192" 00:21:33.092 } 00:21:33.092 } 00:21:33.092 ]' 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.092 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.092 17:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.352 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.352 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.352 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.352 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:33.352 17:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.299 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.300 17:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.873 00:21:34.873 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.873 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.873 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.135 { 00:21:35.135 "cntlid": 43, 00:21:35.135 "qid": 0, 00:21:35.135 "state": "enabled", 00:21:35.135 "thread": "nvmf_tgt_poll_group_000", 00:21:35.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:35.135 "listen_address": { 00:21:35.135 "trtype": "TCP", 00:21:35.135 "adrfam": "IPv4", 00:21:35.135 "traddr": "10.0.0.2", 00:21:35.135 "trsvcid": "4420" 00:21:35.135 }, 00:21:35.135 "peer_address": { 00:21:35.135 "trtype": "TCP", 00:21:35.135 "adrfam": "IPv4", 00:21:35.135 "traddr": "10.0.0.1", 00:21:35.135 "trsvcid": "58136" 00:21:35.135 }, 00:21:35.135 "auth": { 00:21:35.135 "state": "completed", 00:21:35.135 "digest": "sha256", 00:21:35.135 "dhgroup": "ffdhe8192" 00:21:35.135 } 00:21:35.135 } 00:21:35.135 ]' 00:21:35.135 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.136 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.395 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:35.395 17:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.335 17:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.335 17:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.904 00:21:36.904 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.904 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.904 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.163 { 00:21:37.163 "cntlid": 45, 00:21:37.163 "qid": 0, 00:21:37.163 "state": "enabled", 00:21:37.163 "thread": "nvmf_tgt_poll_group_000", 00:21:37.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:37.163 "listen_address": { 00:21:37.163 "trtype": "TCP", 00:21:37.163 "adrfam": "IPv4", 00:21:37.163 "traddr": "10.0.0.2", 00:21:37.163 "trsvcid": "4420" 00:21:37.163 }, 00:21:37.163 "peer_address": { 00:21:37.163 "trtype": "TCP", 00:21:37.163 "adrfam": "IPv4", 00:21:37.163 "traddr": "10.0.0.1", 00:21:37.163 "trsvcid": "58172" 00:21:37.163 }, 00:21:37.163 "auth": { 00:21:37.163 "state": "completed", 00:21:37.163 "digest": "sha256", 00:21:37.163 "dhgroup": "ffdhe8192" 00:21:37.163 } 00:21:37.163 } 00:21:37.163 ]' 00:21:37.163 
17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.163 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.422 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.422 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.422 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.422 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:37.422 17:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:38.362 17:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.362 17:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.930 00:21:38.930 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.930 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.930 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.188 { 00:21:39.188 "cntlid": 47, 00:21:39.188 "qid": 0, 00:21:39.188 "state": "enabled", 00:21:39.188 "thread": "nvmf_tgt_poll_group_000", 00:21:39.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.188 "listen_address": { 00:21:39.188 "trtype": "TCP", 00:21:39.188 "adrfam": "IPv4", 00:21:39.188 "traddr": "10.0.0.2", 00:21:39.188 "trsvcid": "4420" 00:21:39.188 }, 00:21:39.188 "peer_address": { 00:21:39.188 "trtype": "TCP", 00:21:39.188 "adrfam": "IPv4", 00:21:39.188 "traddr": "10.0.0.1", 00:21:39.188 "trsvcid": "58196" 00:21:39.188 }, 00:21:39.188 "auth": { 00:21:39.188 "state": "completed", 00:21:39.188 
"digest": "sha256", 00:21:39.188 "dhgroup": "ffdhe8192" 00:21:39.188 } 00:21:39.188 } 00:21:39.188 ]' 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.188 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.448 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:39.448 17:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:40.388 17:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.388 17:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.647 00:21:40.647 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.647 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.647 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.907 { 00:21:40.907 "cntlid": 49, 00:21:40.907 "qid": 0, 00:21:40.907 "state": "enabled", 00:21:40.907 "thread": "nvmf_tgt_poll_group_000", 00:21:40.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.907 "listen_address": { 00:21:40.907 "trtype": "TCP", 00:21:40.907 "adrfam": "IPv4", 
00:21:40.907 "traddr": "10.0.0.2", 00:21:40.907 "trsvcid": "4420" 00:21:40.907 }, 00:21:40.907 "peer_address": { 00:21:40.907 "trtype": "TCP", 00:21:40.907 "adrfam": "IPv4", 00:21:40.907 "traddr": "10.0.0.1", 00:21:40.907 "trsvcid": "58214" 00:21:40.907 }, 00:21:40.907 "auth": { 00:21:40.907 "state": "completed", 00:21:40.907 "digest": "sha384", 00:21:40.907 "dhgroup": "null" 00:21:40.907 } 00:21:40.907 } 00:21:40.907 ]' 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.907 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.167 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:41.167 17:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.108 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.368 00:21:42.368 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.368 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.368 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.629 { 00:21:42.629 "cntlid": 51, 00:21:42.629 "qid": 0, 00:21:42.629 "state": "enabled", 
00:21:42.629 "thread": "nvmf_tgt_poll_group_000", 00:21:42.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:42.629 "listen_address": { 00:21:42.629 "trtype": "TCP", 00:21:42.629 "adrfam": "IPv4", 00:21:42.629 "traddr": "10.0.0.2", 00:21:42.629 "trsvcid": "4420" 00:21:42.629 }, 00:21:42.629 "peer_address": { 00:21:42.629 "trtype": "TCP", 00:21:42.629 "adrfam": "IPv4", 00:21:42.629 "traddr": "10.0.0.1", 00:21:42.629 "trsvcid": "58228" 00:21:42.629 }, 00:21:42.629 "auth": { 00:21:42.629 "state": "completed", 00:21:42.629 "digest": "sha384", 00:21:42.629 "dhgroup": "null" 00:21:42.629 } 00:21:42.629 } 00:21:42.629 ]' 00:21:42.629 17:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.629 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.890 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:42.890 17:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:43.462 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.722 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.723 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.723 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.723 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.982 00:21:43.982 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.982 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.982 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.243 17:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.243 { 00:21:44.243 "cntlid": 53, 00:21:44.243 "qid": 0, 00:21:44.243 "state": "enabled", 00:21:44.243 "thread": "nvmf_tgt_poll_group_000", 00:21:44.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.243 "listen_address": { 00:21:44.243 "trtype": "TCP", 00:21:44.243 "adrfam": "IPv4", 00:21:44.243 "traddr": "10.0.0.2", 00:21:44.243 "trsvcid": "4420" 00:21:44.243 }, 00:21:44.243 "peer_address": { 00:21:44.243 "trtype": "TCP", 00:21:44.243 "adrfam": "IPv4", 00:21:44.243 "traddr": "10.0.0.1", 00:21:44.243 "trsvcid": "57308" 00:21:44.243 }, 00:21:44.243 "auth": { 00:21:44.243 "state": "completed", 00:21:44.243 "digest": "sha384", 00:21:44.243 "dhgroup": "null" 00:21:44.243 } 00:21:44.243 } 00:21:44.243 ]' 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.243 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.552 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:44.552 17:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:45.209 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.476 17:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.793 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.793 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.053 { 00:21:46.053 "cntlid": 55, 00:21:46.053 "qid": 0, 00:21:46.053 "state": "enabled", 00:21:46.053 "thread": "nvmf_tgt_poll_group_000", 00:21:46.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.053 "listen_address": { 00:21:46.053 "trtype": "TCP", 00:21:46.053 "adrfam": "IPv4", 00:21:46.053 "traddr": "10.0.0.2", 00:21:46.053 "trsvcid": "4420" 00:21:46.053 }, 00:21:46.053 "peer_address": { 00:21:46.053 "trtype": "TCP", 00:21:46.053 "adrfam": "IPv4", 00:21:46.053 "traddr": "10.0.0.1", 00:21:46.053 "trsvcid": "57330" 00:21:46.053 }, 00:21:46.053 "auth": { 00:21:46.053 "state": "completed", 00:21:46.053 "digest": "sha384", 00:21:46.053 "dhgroup": "null" 00:21:46.053 } 00:21:46.053 } 00:21:46.053 ]' 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.053 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.054 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.054 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.054 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.054 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.313 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:46.313 17:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.885 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.885 17:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.886 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.886 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.146 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.407 00:21:47.407 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.407 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.407 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.668 { 00:21:47.668 "cntlid": 57, 00:21:47.668 "qid": 0, 00:21:47.668 "state": "enabled", 00:21:47.668 "thread": "nvmf_tgt_poll_group_000", 00:21:47.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:47.668 "listen_address": { 00:21:47.668 "trtype": "TCP", 00:21:47.668 "adrfam": "IPv4", 00:21:47.668 "traddr": "10.0.0.2", 00:21:47.668 "trsvcid": "4420" 00:21:47.668 }, 00:21:47.668 "peer_address": { 00:21:47.668 "trtype": "TCP", 00:21:47.668 "adrfam": "IPv4", 00:21:47.668 "traddr": "10.0.0.1", 00:21:47.668 "trsvcid": "57354" 00:21:47.668 }, 00:21:47.668 "auth": { 00:21:47.668 "state": "completed", 00:21:47.668 "digest": "sha384", 00:21:47.668 "dhgroup": "ffdhe2048" 00:21:47.668 } 00:21:47.668 } 00:21:47.668 ]' 00:21:47.668 17:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.668 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.928 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:47.928 17:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:48.498 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.498 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.498 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.498 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.759 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.020 00:21:49.020 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.020 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.020 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.281 { 00:21:49.281 "cntlid": 59, 00:21:49.281 "qid": 0, 00:21:49.281 "state": "enabled", 00:21:49.281 "thread": "nvmf_tgt_poll_group_000", 00:21:49.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:49.281 "listen_address": { 00:21:49.281 "trtype": "TCP", 00:21:49.281 "adrfam": "IPv4", 00:21:49.281 "traddr": "10.0.0.2", 00:21:49.281 "trsvcid": "4420" 00:21:49.281 }, 00:21:49.281 "peer_address": { 00:21:49.281 "trtype": "TCP", 00:21:49.281 "adrfam": "IPv4", 00:21:49.281 "traddr": "10.0.0.1", 00:21:49.281 "trsvcid": "57400" 00:21:49.281 }, 00:21:49.281 "auth": { 00:21:49.281 "state": "completed", 00:21:49.281 "digest": "sha384", 00:21:49.281 "dhgroup": "ffdhe2048" 00:21:49.281 } 00:21:49.281 } 00:21:49.281 ]' 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.281 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.541 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.541 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.541 17:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.541 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:49.541 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.482 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.483 17:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.742 00:21:50.742 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.742 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.742 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.004 { 00:21:51.004 "cntlid": 61, 00:21:51.004 "qid": 0, 00:21:51.004 "state": "enabled", 00:21:51.004 "thread": "nvmf_tgt_poll_group_000", 00:21:51.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.004 "listen_address": { 00:21:51.004 "trtype": "TCP", 00:21:51.004 "adrfam": "IPv4", 00:21:51.004 "traddr": "10.0.0.2", 00:21:51.004 "trsvcid": "4420" 00:21:51.004 }, 00:21:51.004 "peer_address": { 00:21:51.004 "trtype": "TCP", 00:21:51.004 "adrfam": "IPv4", 00:21:51.004 "traddr": "10.0.0.1", 00:21:51.004 "trsvcid": "57424" 00:21:51.004 }, 00:21:51.004 "auth": { 00:21:51.004 "state": "completed", 00:21:51.004 "digest": "sha384", 00:21:51.004 "dhgroup": "ffdhe2048" 00:21:51.004 } 00:21:51.004 } 00:21:51.004 ]' 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.004 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.265 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.265 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.265 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.265 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:51.265 17:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.208 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.209 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.209 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.472 00:21:52.472 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.472 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.472 17:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.732 { 00:21:52.732 "cntlid": 63, 00:21:52.732 "qid": 0, 00:21:52.732 "state": "enabled", 00:21:52.732 "thread": "nvmf_tgt_poll_group_000", 00:21:52.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:52.732 "listen_address": { 00:21:52.732 "trtype": "TCP", 00:21:52.732 "adrfam": "IPv4", 00:21:52.732 "traddr": "10.0.0.2", 00:21:52.732 "trsvcid": "4420" 00:21:52.732 }, 00:21:52.732 "peer_address": { 00:21:52.732 "trtype": "TCP", 00:21:52.732 "adrfam": "IPv4", 00:21:52.732 "traddr": "10.0.0.1", 00:21:52.732 "trsvcid": "57446" 00:21:52.732 }, 00:21:52.732 "auth": { 00:21:52.732 "state": "completed", 00:21:52.732 "digest": "sha384", 00:21:52.732 "dhgroup": "ffdhe2048" 00:21:52.732 } 00:21:52.732 } 00:21:52.732 ]' 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.732 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.992 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:52.992 17:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:53.933 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:53.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.934 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.194 
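Every iteration traced here follows the same connect_authenticate pattern: reconfigure the host initiator with one digest/dhgroup pair, register the host NQN on the subsystem with the key pair under test, then attach a controller over TCP with DH-HMAC-CHAP. A minimal standalone sketch of that setup/attach half, using the paths, addresses and flags shown in the trace; it assumes rpc.py without -s reaches the target's default RPC socket (only the host-side socket /var/tmp/host.sock is visible in the trace) and that key1/ckey1 name keys already loaded earlier in the script:

#!/usr/bin/env bash
# Sketch of the setup/attach half of one connect_authenticate iteration.
# $rpc, $hostnqn, $subnqn and the keyid "key1"/"ckey1" are stand-ins, not excerpts of target/auth.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to a single digest and FFDHE group for this iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket assumed): allow this host with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller over TCP, authenticating with the same key pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1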
00:21:54.194 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.194 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.194 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.454 { 00:21:54.454 "cntlid": 65, 00:21:54.454 "qid": 0, 00:21:54.454 "state": "enabled", 00:21:54.454 "thread": "nvmf_tgt_poll_group_000", 00:21:54.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.454 "listen_address": { 00:21:54.454 "trtype": "TCP", 00:21:54.454 "adrfam": "IPv4", 00:21:54.454 "traddr": "10.0.0.2", 00:21:54.454 "trsvcid": "4420" 00:21:54.454 }, 00:21:54.454 "peer_address": { 00:21:54.454 "trtype": "TCP", 00:21:54.454 "adrfam": "IPv4", 00:21:54.454 "traddr": "10.0.0.1", 00:21:54.454 "trsvcid": "47786" 00:21:54.454 }, 00:21:54.454 "auth": { 00:21:54.454 "state": "completed", 00:21:54.454 "digest": "sha384", 00:21:54.454 "dhgroup": "ffdhe3072" 00:21:54.454 } 00:21:54.454 } 00:21:54.454 ]' 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.454 17:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.714 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:54.715 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.656 17:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.656 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.917 00:21:55.917 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.917 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.917 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.178 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.178 { 00:21:56.178 "cntlid": 67, 00:21:56.178 "qid": 0, 00:21:56.178 "state": "enabled", 00:21:56.178 "thread": "nvmf_tgt_poll_group_000", 00:21:56.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.178 "listen_address": { 00:21:56.178 "trtype": "TCP", 00:21:56.178 "adrfam": "IPv4", 00:21:56.178 "traddr": "10.0.0.2", 00:21:56.178 "trsvcid": "4420" 00:21:56.178 }, 00:21:56.178 "peer_address": { 00:21:56.178 "trtype": "TCP", 00:21:56.178 "adrfam": "IPv4", 00:21:56.178 "traddr": "10.0.0.1", 00:21:56.178 "trsvcid": "47804" 00:21:56.179 }, 00:21:56.179 "auth": { 00:21:56.179 "state": "completed", 00:21:56.179 "digest": "sha384", 00:21:56.179 "dhgroup": "ffdhe3072" 00:21:56.179 } 00:21:56.179 } 00:21:56.179 ]' 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.179 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.439 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret 
DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:56.439 17:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.382 17:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.643 00:21:57.643 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.643 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.643 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.904 { 00:21:57.904 "cntlid": 69, 00:21:57.904 "qid": 0, 00:21:57.904 "state": "enabled", 00:21:57.904 "thread": "nvmf_tgt_poll_group_000", 00:21:57.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:57.904 "listen_address": { 00:21:57.904 "trtype": "TCP", 00:21:57.904 "adrfam": "IPv4", 00:21:57.904 "traddr": "10.0.0.2", 00:21:57.904 "trsvcid": "4420" 00:21:57.904 }, 00:21:57.904 "peer_address": { 00:21:57.904 "trtype": "TCP", 00:21:57.904 "adrfam": "IPv4", 00:21:57.904 "traddr": "10.0.0.1", 00:21:57.904 "trsvcid": "47818" 00:21:57.904 }, 00:21:57.904 "auth": { 00:21:57.904 "state": "completed", 00:21:57.904 "digest": "sha384", 00:21:57.904 "dhgroup": "ffdhe3072" 00:21:57.904 } 00:21:57.904 } 00:21:57.904 ]' 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.904 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:58.165 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:58.165 17:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:58.735 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
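After each attach, the script verifies that authentication actually completed with the expected parameters before tearing the controller down: the controller must show up as nvme0, and the subsystem's qpair JSON (the dumps interleaved above) must report the negotiated digest, dhgroup, and an auth state of "completed". A sketch of that verify/teardown half, reusing the jq filters from the trace and the same socket assumptions as the previous sketch:

# Sketch of the verify/teardown half ($rpc/$subnqn as in the previous sketch; expected
# values here match the ffdhe3072 iteration shown above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# The attached controller must be visible on the host as nvme0.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target's qpair listing records what was actually negotiated during authentication.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the host-side controller so the kernel-initiator leg starts from a clean state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0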
00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.996 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.257 00:21:59.257 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.257 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.257 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.517 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.517 { 00:21:59.517 "cntlid": 71, 00:21:59.517 "qid": 0, 00:21:59.517 "state": "enabled", 00:21:59.517 "thread": "nvmf_tgt_poll_group_000", 00:21:59.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.517 "listen_address": { 00:21:59.518 "trtype": "TCP", 00:21:59.518 "adrfam": "IPv4", 00:21:59.518 "traddr": "10.0.0.2", 00:21:59.518 "trsvcid": "4420" 00:21:59.518 }, 00:21:59.518 "peer_address": { 00:21:59.518 "trtype": "TCP", 00:21:59.518 "adrfam": "IPv4", 00:21:59.518 "traddr": "10.0.0.1", 00:21:59.518 "trsvcid": "47832" 00:21:59.518 }, 00:21:59.518 "auth": { 00:21:59.518 "state": "completed", 00:21:59.518 "digest": "sha384", 00:21:59.518 "dhgroup": "ffdhe3072" 00:21:59.518 } 00:21:59.518 } 00:21:59.518 ]' 00:21:59.518 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.518 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.518 17:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.518 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.518 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.518 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.518 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.518 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.778 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:21:59.778 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:00.720 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.720 17:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
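Each iteration also exercises the kernel initiator: the same key pair is handed to nvme-cli as DHHC-1 interchange-format secrets, the connection is dropped again, and the host entry is removed so the next dhgroup/keyid combination starts clean. A sketch of that tail; $secret and $ctrl_secret stand for the full DHHC-1:xx:... strings printed verbatim in the trace, and $rpc/$hostnqn/$subnqn carry the same values as the sketches above:

# Sketch of the nvme-cli leg of each iteration.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Kernel initiator connects with the interchange-format secrets for the same key pair.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

# Tear the connection down (prints "NQN:... disconnected 1 controller(s)" as in the log).
nvme disconnect -n "$subnqn"

# Target side: revoke the host so the next iteration re-adds it with the next key.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"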
00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.720 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.980 00:22:00.980 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.980 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.980 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.240 { 00:22:01.240 "cntlid": 73, 00:22:01.240 "qid": 0, 00:22:01.240 "state": "enabled", 00:22:01.240 "thread": "nvmf_tgt_poll_group_000", 00:22:01.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.240 "listen_address": { 00:22:01.240 "trtype": "TCP", 00:22:01.240 "adrfam": "IPv4", 00:22:01.240 "traddr": "10.0.0.2", 00:22:01.240 "trsvcid": "4420" 00:22:01.240 }, 00:22:01.240 "peer_address": { 00:22:01.240 "trtype": "TCP", 00:22:01.240 "adrfam": "IPv4", 00:22:01.240 "traddr": "10.0.0.1", 00:22:01.240 "trsvcid": "47842" 00:22:01.240 }, 00:22:01.240 "auth": { 00:22:01.240 "state": "completed", 00:22:01.240 "digest": "sha384", 00:22:01.240 "dhgroup": "ffdhe4096" 00:22:01.240 } 00:22:01.240 } 00:22:01.240 ]' 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.240 
17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.240 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.501 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:01.501 17:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.444 17:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.705 00:22:02.705 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.705 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.705 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.966 { 00:22:02.966 "cntlid": 75, 00:22:02.966 "qid": 0, 00:22:02.966 "state": "enabled", 00:22:02.966 "thread": "nvmf_tgt_poll_group_000", 00:22:02.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:02.966 "listen_address": { 00:22:02.966 "trtype": "TCP", 00:22:02.966 "adrfam": "IPv4", 00:22:02.966 "traddr": "10.0.0.2", 00:22:02.966 "trsvcid": "4420" 00:22:02.966 }, 00:22:02.966 "peer_address": { 00:22:02.966 "trtype": "TCP", 00:22:02.966 "adrfam": "IPv4", 00:22:02.966 "traddr": "10.0.0.1", 00:22:02.966 "trsvcid": "42480" 00:22:02.966 }, 00:22:02.966 "auth": { 00:22:02.966 "state": "completed", 00:22:02.966 "digest": "sha384", 00:22:02.966 "dhgroup": "ffdhe4096" 00:22:02.966 } 00:22:02.966 } 00:22:02.966 ]' 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.966 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.227 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:03.227 17:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.169 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.430 00:22:04.430 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.430 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.430 17:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.691 { 00:22:04.691 "cntlid": 77, 00:22:04.691 "qid": 0, 00:22:04.691 "state": "enabled", 00:22:04.691 "thread": "nvmf_tgt_poll_group_000", 00:22:04.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.691 "listen_address": { 00:22:04.691 "trtype": "TCP", 00:22:04.691 "adrfam": "IPv4", 00:22:04.691 "traddr": "10.0.0.2", 00:22:04.691 "trsvcid": "4420" 00:22:04.691 }, 00:22:04.691 "peer_address": { 00:22:04.691 "trtype": "TCP", 00:22:04.691 "adrfam": "IPv4", 00:22:04.691 "traddr": "10.0.0.1", 00:22:04.691 "trsvcid": "42508" 00:22:04.691 }, 00:22:04.691 "auth": { 00:22:04.691 "state": "completed", 00:22:04.691 "digest": "sha384", 00:22:04.691 "dhgroup": "ffdhe4096" 00:22:04.691 } 00:22:04.691 } 00:22:04.691 ]' 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.691 17:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.691 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.953 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:04.953 17:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:05.894 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.894 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.894 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.894 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.895 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.156 00:22:06.156 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.156 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.156 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.417 { 00:22:06.417 "cntlid": 79, 00:22:06.417 "qid": 0, 00:22:06.417 "state": "enabled", 00:22:06.417 "thread": "nvmf_tgt_poll_group_000", 00:22:06.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.417 "listen_address": { 00:22:06.417 "trtype": "TCP", 00:22:06.417 "adrfam": "IPv4", 00:22:06.417 "traddr": "10.0.0.2", 00:22:06.417 "trsvcid": "4420" 00:22:06.417 }, 00:22:06.417 "peer_address": { 00:22:06.417 "trtype": "TCP", 00:22:06.417 "adrfam": "IPv4", 00:22:06.417 "traddr": "10.0.0.1", 00:22:06.417 "trsvcid": "42530" 00:22:06.417 }, 00:22:06.417 "auth": { 00:22:06.417 "state": "completed", 00:22:06.417 "digest": "sha384", 00:22:06.417 "dhgroup": "ffdhe4096" 00:22:06.417 } 00:22:06.417 } 00:22:06.417 ]' 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.417 17:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.417 17:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.679 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:06.679 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.624 17:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.624 17:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.624 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.885 00:22:07.885 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.885 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.885 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.146 { 00:22:08.146 "cntlid": 81, 00:22:08.146 "qid": 0, 00:22:08.146 "state": "enabled", 00:22:08.146 "thread": "nvmf_tgt_poll_group_000", 00:22:08.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:08.146 "listen_address": { 00:22:08.146 "trtype": "TCP", 00:22:08.146 "adrfam": "IPv4", 00:22:08.146 "traddr": "10.0.0.2", 00:22:08.146 "trsvcid": "4420" 00:22:08.146 }, 00:22:08.146 "peer_address": { 00:22:08.146 "trtype": "TCP", 00:22:08.146 "adrfam": "IPv4", 00:22:08.146 "traddr": "10.0.0.1", 00:22:08.146 "trsvcid": "42552" 00:22:08.146 }, 00:22:08.146 "auth": { 00:22:08.146 "state": "completed", 00:22:08.146 "digest": 
"sha384", 00:22:08.146 "dhgroup": "ffdhe6144" 00:22:08.146 } 00:22:08.146 } 00:22:08.146 ]' 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.146 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.407 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.407 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.407 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.407 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:08.407 17:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.348 17:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.919 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.919 { 00:22:09.919 "cntlid": 83, 00:22:09.919 "qid": 0, 00:22:09.919 "state": "enabled", 00:22:09.919 "thread": "nvmf_tgt_poll_group_000", 00:22:09.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.919 "listen_address": { 00:22:09.919 "trtype": "TCP", 00:22:09.919 "adrfam": "IPv4", 00:22:09.919 "traddr": "10.0.0.2", 00:22:09.919 
"trsvcid": "4420" 00:22:09.919 }, 00:22:09.919 "peer_address": { 00:22:09.919 "trtype": "TCP", 00:22:09.919 "adrfam": "IPv4", 00:22:09.919 "traddr": "10.0.0.1", 00:22:09.919 "trsvcid": "42584" 00:22:09.919 }, 00:22:09.919 "auth": { 00:22:09.919 "state": "completed", 00:22:09.919 "digest": "sha384", 00:22:09.919 "dhgroup": "ffdhe6144" 00:22:09.919 } 00:22:09.919 } 00:22:09.919 ]' 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.919 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.180 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.180 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.180 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.180 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:10.180 17:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.119 
17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.119 17:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.692 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.692 { 00:22:11.692 "cntlid": 85, 00:22:11.692 "qid": 0, 00:22:11.692 "state": "enabled", 00:22:11.692 "thread": "nvmf_tgt_poll_group_000", 00:22:11.692 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.692 "listen_address": { 00:22:11.692 "trtype": "TCP", 00:22:11.692 "adrfam": "IPv4", 00:22:11.692 "traddr": "10.0.0.2", 00:22:11.692 "trsvcid": "4420" 00:22:11.692 }, 00:22:11.692 "peer_address": { 00:22:11.692 "trtype": "TCP", 00:22:11.692 "adrfam": "IPv4", 00:22:11.692 "traddr": "10.0.0.1", 00:22:11.692 "trsvcid": "42608" 00:22:11.692 }, 00:22:11.692 "auth": { 00:22:11.692 "state": "completed", 00:22:11.692 "digest": "sha384", 00:22:11.692 "dhgroup": "ffdhe6144" 00:22:11.692 } 00:22:11.692 } 00:22:11.692 ]' 00:22:11.692 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:11.953 17:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.893 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.893 17:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.157 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.417 00:22:13.417 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.417 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.417 17:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.678 { 00:22:13.678 "cntlid": 87, 
00:22:13.678 "qid": 0, 00:22:13.678 "state": "enabled", 00:22:13.678 "thread": "nvmf_tgt_poll_group_000", 00:22:13.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.678 "listen_address": { 00:22:13.678 "trtype": "TCP", 00:22:13.678 "adrfam": "IPv4", 00:22:13.678 "traddr": "10.0.0.2", 00:22:13.678 "trsvcid": "4420" 00:22:13.678 }, 00:22:13.678 "peer_address": { 00:22:13.678 "trtype": "TCP", 00:22:13.678 "adrfam": "IPv4", 00:22:13.678 "traddr": "10.0.0.1", 00:22:13.678 "trsvcid": "41164" 00:22:13.678 }, 00:22:13.678 "auth": { 00:22:13.678 "state": "completed", 00:22:13.678 "digest": "sha384", 00:22:13.678 "dhgroup": "ffdhe6144" 00:22:13.678 } 00:22:13.678 } 00:22:13.678 ]' 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.678 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.938 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:13.938 17:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.879 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.450 00:22:15.450 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.450 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.450 17:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.711 { 00:22:15.711 "cntlid": 89, 00:22:15.711 "qid": 0, 00:22:15.711 "state": "enabled", 00:22:15.711 "thread": "nvmf_tgt_poll_group_000", 00:22:15.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:15.711 "listen_address": { 00:22:15.711 "trtype": "TCP", 00:22:15.711 "adrfam": "IPv4", 00:22:15.711 "traddr": "10.0.0.2", 00:22:15.711 "trsvcid": "4420" 00:22:15.711 }, 00:22:15.711 "peer_address": { 00:22:15.711 "trtype": "TCP", 00:22:15.711 "adrfam": "IPv4", 00:22:15.711 "traddr": "10.0.0.1", 00:22:15.711 "trsvcid": "41176" 00:22:15.711 }, 00:22:15.711 "auth": { 00:22:15.711 "state": "completed", 00:22:15.711 "digest": "sha384", 00:22:15.711 "dhgroup": "ffdhe8192" 00:22:15.711 } 00:22:15.711 } 00:22:15.711 ]' 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.711 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.712 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.972 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:15.972 17:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.914 17:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.914 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.508 00:22:17.508 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.508 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.508 17:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.508 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.508 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:17.508 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.508 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.508 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.509 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.509 { 00:22:17.509 "cntlid": 91, 00:22:17.509 "qid": 0, 00:22:17.509 "state": "enabled", 00:22:17.509 "thread": "nvmf_tgt_poll_group_000", 00:22:17.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:17.509 "listen_address": { 00:22:17.509 "trtype": "TCP", 00:22:17.509 "adrfam": "IPv4", 00:22:17.509 "traddr": "10.0.0.2", 00:22:17.509 "trsvcid": "4420" 00:22:17.509 }, 00:22:17.509 "peer_address": { 00:22:17.509 "trtype": "TCP", 00:22:17.509 "adrfam": "IPv4", 00:22:17.509 "traddr": "10.0.0.1", 00:22:17.509 "trsvcid": "41204" 00:22:17.509 }, 00:22:17.509 "auth": { 00:22:17.509 "state": "completed", 00:22:17.509 "digest": "sha384", 00:22:17.509 "dhgroup": "ffdhe8192" 00:22:17.509 } 00:22:17.509 } 00:22:17.509 ]' 00:22:17.509 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.768 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.769 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.029 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:18.029 17:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:18.600 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.861 17:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.861 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.432 00:22:19.432 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.432 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.432 17:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.692 17:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.692 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.693 { 00:22:19.693 "cntlid": 93, 00:22:19.693 "qid": 0, 00:22:19.693 "state": "enabled", 00:22:19.693 "thread": "nvmf_tgt_poll_group_000", 00:22:19.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:19.693 "listen_address": { 00:22:19.693 "trtype": "TCP", 00:22:19.693 "adrfam": "IPv4", 00:22:19.693 "traddr": "10.0.0.2", 00:22:19.693 "trsvcid": "4420" 00:22:19.693 }, 00:22:19.693 "peer_address": { 00:22:19.693 "trtype": "TCP", 00:22:19.693 "adrfam": "IPv4", 00:22:19.693 "traddr": "10.0.0.1", 00:22:19.693 "trsvcid": "41228" 00:22:19.693 }, 00:22:19.693 "auth": { 00:22:19.693 "state": "completed", 00:22:19.693 "digest": "sha384", 00:22:19.693 "dhgroup": "ffdhe8192" 00:22:19.693 } 00:22:19.693 } 00:22:19.693 ]' 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.693 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.952 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:19.952 17:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.893 17:22:19 
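The qpair JSON dumps in this trace feed the three jq assertions that follow them. A minimal sketch of that check, using the same jq paths seen in the log; $digest and $dhgroup stand in for the values of the current pass:

  # the qpair created by the authenticated attach must report the expected auth parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"   ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"  ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]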
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.893 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.463 00:22:21.463 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.463 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.463 17:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.723 { 00:22:21.723 "cntlid": 95, 00:22:21.723 "qid": 0, 00:22:21.723 "state": "enabled", 00:22:21.723 "thread": "nvmf_tgt_poll_group_000", 00:22:21.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:21.723 "listen_address": { 00:22:21.723 "trtype": "TCP", 00:22:21.723 "adrfam": "IPv4", 00:22:21.723 "traddr": "10.0.0.2", 00:22:21.723 "trsvcid": "4420" 00:22:21.723 }, 00:22:21.723 "peer_address": { 00:22:21.723 "trtype": "TCP", 00:22:21.723 "adrfam": "IPv4", 00:22:21.723 "traddr": "10.0.0.1", 00:22:21.723 "trsvcid": "41260" 00:22:21.723 }, 00:22:21.723 "auth": { 00:22:21.723 "state": "completed", 00:22:21.723 "digest": "sha384", 00:22:21.723 "dhgroup": "ffdhe8192" 00:22:21.723 } 00:22:21.723 } 00:22:21.723 ]' 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.723 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.984 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:21.984 17:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:22.598 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.926 17:22:21 
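At this point the trace finishes the sha384/ffdhe8192 group and, in the lines that follow, moves on to sha512 with the null DH group, which is simply the next turn of the outer loops. A sketch of that loop shape, reconstructed from the for-lines echoed in the log; the contents of the digests, dhgroups and keys arrays are not shown in this excerpt, so anything beyond the values that actually appear (sha384, sha512, null, ffdhe2048, ffdhe8192, keys 0-3) is an assumption:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # pin the host-side initiator to a single digest/dhgroup so the
              # negotiated values checked above are deterministic
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done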
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:22.926 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.927 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.187 00:22:23.187 
17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.187 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.187 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.449 { 00:22:23.449 "cntlid": 97, 00:22:23.449 "qid": 0, 00:22:23.449 "state": "enabled", 00:22:23.449 "thread": "nvmf_tgt_poll_group_000", 00:22:23.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:23.449 "listen_address": { 00:22:23.449 "trtype": "TCP", 00:22:23.449 "adrfam": "IPv4", 00:22:23.449 "traddr": "10.0.0.2", 00:22:23.449 "trsvcid": "4420" 00:22:23.449 }, 00:22:23.449 "peer_address": { 00:22:23.449 "trtype": "TCP", 00:22:23.449 "adrfam": "IPv4", 00:22:23.449 "traddr": "10.0.0.1", 00:22:23.449 "trsvcid": "46762" 00:22:23.449 }, 00:22:23.449 "auth": { 00:22:23.449 "state": "completed", 00:22:23.449 "digest": "sha512", 00:22:23.449 "dhgroup": "null" 00:22:23.449 } 00:22:23.449 } 00:22:23.449 ]' 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.449 17:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.711 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:23.711 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:24.653 17:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.653 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.915 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.915 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.176 { 00:22:25.176 "cntlid": 99, 00:22:25.176 "qid": 0, 00:22:25.176 "state": "enabled", 00:22:25.176 "thread": "nvmf_tgt_poll_group_000", 00:22:25.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.176 "listen_address": { 00:22:25.176 "trtype": "TCP", 00:22:25.176 "adrfam": "IPv4", 00:22:25.176 "traddr": "10.0.0.2", 00:22:25.176 "trsvcid": "4420" 00:22:25.176 }, 00:22:25.176 "peer_address": { 00:22:25.176 "trtype": "TCP", 00:22:25.176 "adrfam": "IPv4", 00:22:25.176 "traddr": "10.0.0.1", 00:22:25.176 "trsvcid": "46790" 00:22:25.176 }, 00:22:25.176 "auth": { 00:22:25.176 "state": "completed", 00:22:25.176 "digest": "sha512", 00:22:25.176 "dhgroup": "null" 00:22:25.176 } 00:22:25.176 } 00:22:25.176 ]' 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.176 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.437 17:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:25.437 17:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:26.009 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.009 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.009 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.009 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
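The ckey=(...) expansion echoed at target/auth.sh@68 explains why some passes configure bidirectional authentication and others do not: the --dhchap-ctrlr-key argument is added only when a controller key exists for that index, and in this run the key3 passes carry no ckey3. A sketch of that expansion, with $keyid standing in for the script's positional $3:

  # add a controller (bidirectional) key only if ckeys[keyid] is set
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"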
00:22:26.269 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.531 00:22:26.531 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.531 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.531 17:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.791 { 00:22:26.791 "cntlid": 101, 00:22:26.791 "qid": 0, 00:22:26.791 "state": "enabled", 00:22:26.791 "thread": "nvmf_tgt_poll_group_000", 00:22:26.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:26.791 "listen_address": { 00:22:26.791 "trtype": "TCP", 00:22:26.791 "adrfam": "IPv4", 00:22:26.791 "traddr": "10.0.0.2", 00:22:26.791 "trsvcid": "4420" 00:22:26.791 }, 00:22:26.791 "peer_address": { 00:22:26.791 "trtype": "TCP", 00:22:26.791 "adrfam": "IPv4", 00:22:26.791 "traddr": "10.0.0.1", 00:22:26.791 "trsvcid": "46812" 00:22:26.791 }, 00:22:26.791 "auth": { 00:22:26.791 "state": "completed", 00:22:26.791 "digest": "sha512", 00:22:26.791 "dhgroup": "null" 00:22:26.791 } 00:22:26.791 } 00:22:26.791 ]' 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.791 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.051 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:27.051 17:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.990 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.250 00:22:28.250 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.250 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.250 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.510 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.510 { 00:22:28.510 "cntlid": 103, 00:22:28.510 "qid": 0, 00:22:28.510 "state": "enabled", 00:22:28.510 "thread": "nvmf_tgt_poll_group_000", 00:22:28.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.510 "listen_address": { 00:22:28.510 "trtype": "TCP", 00:22:28.510 "adrfam": "IPv4", 00:22:28.510 "traddr": "10.0.0.2", 00:22:28.510 "trsvcid": "4420" 00:22:28.510 }, 00:22:28.511 "peer_address": { 00:22:28.511 "trtype": "TCP", 00:22:28.511 "adrfam": "IPv4", 00:22:28.511 "traddr": "10.0.0.1", 00:22:28.511 "trsvcid": "46840" 00:22:28.511 }, 00:22:28.511 "auth": { 00:22:28.511 "state": "completed", 00:22:28.511 "digest": "sha512", 00:22:28.511 "dhgroup": "null" 00:22:28.511 } 00:22:28.511 } 00:22:28.511 ]' 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.511 17:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.511 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.511 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.771 17:22:27 
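Besides the SPDK host-side attach, every pass also connects with the Linux kernel initiator through nvme-cli, as in the nvme_connect and nvme disconnect lines above. A minimal sketch; the DHHC-1 secret strings are abbreviated placeholders here (the full generated values appear verbatim in the log), $hostnqn and $hostid stand for the literal values shown, and --dhchap-ctrl-secret is only passed on passes that have a controller key:

  # kernel-initiator leg of the same authentication pass; secrets abbreviated
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:03:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0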
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:28.771 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.711 17:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.711 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.971 00:22:29.971 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.971 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.971 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.231 { 00:22:30.231 "cntlid": 105, 00:22:30.231 "qid": 0, 00:22:30.231 "state": "enabled", 00:22:30.231 "thread": "nvmf_tgt_poll_group_000", 00:22:30.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:30.231 "listen_address": { 00:22:30.231 "trtype": "TCP", 00:22:30.231 "adrfam": "IPv4", 00:22:30.231 "traddr": "10.0.0.2", 00:22:30.231 "trsvcid": "4420" 00:22:30.231 }, 00:22:30.231 "peer_address": { 00:22:30.231 "trtype": "TCP", 00:22:30.231 "adrfam": "IPv4", 00:22:30.231 "traddr": "10.0.0.1", 00:22:30.231 "trsvcid": "46868" 00:22:30.231 }, 00:22:30.231 "auth": { 00:22:30.231 "state": "completed", 00:22:30.231 "digest": "sha512", 00:22:30.231 "dhgroup": "ffdhe2048" 00:22:30.231 } 00:22:30.231 } 00:22:30.231 ]' 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.231 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.232 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.232 17:22:28 
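Every hostrpc line in this trace expands to the same invocation of SPDK's rpc.py against the host application's socket, as echoed at target/auth.sh@31; rpc_cmd presumably addresses the nvmf target's own RPC socket, though that expansion is not shown in this excerpt:

  # hostrpc: drive the host-side SPDK app over its dedicated RPC socket
  hostrpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
  }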
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.492 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:30.492 17:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.433 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.434 17:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.694 00:22:31.694 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.694 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.694 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.955 { 00:22:31.955 "cntlid": 107, 00:22:31.955 "qid": 0, 00:22:31.955 "state": "enabled", 00:22:31.955 "thread": "nvmf_tgt_poll_group_000", 00:22:31.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:31.955 "listen_address": { 00:22:31.955 "trtype": "TCP", 00:22:31.955 "adrfam": "IPv4", 00:22:31.955 "traddr": "10.0.0.2", 00:22:31.955 "trsvcid": "4420" 00:22:31.955 }, 00:22:31.955 "peer_address": { 00:22:31.955 "trtype": "TCP", 00:22:31.955 "adrfam": "IPv4", 00:22:31.955 "traddr": "10.0.0.1", 00:22:31.955 "trsvcid": "46882" 00:22:31.955 }, 00:22:31.955 "auth": { 00:22:31.955 "state": "completed", 00:22:31.955 "digest": "sha512", 00:22:31.955 "dhgroup": "ffdhe2048" 00:22:31.955 } 00:22:31.955 } 00:22:31.955 ]' 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.955 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.215 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:32.215 17:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.157 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.417 00:22:33.417 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.417 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.417 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.679 { 00:22:33.679 "cntlid": 109, 00:22:33.679 "qid": 0, 00:22:33.679 "state": "enabled", 00:22:33.679 "thread": "nvmf_tgt_poll_group_000", 00:22:33.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:33.679 "listen_address": { 00:22:33.679 "trtype": "TCP", 00:22:33.679 "adrfam": "IPv4", 00:22:33.679 "traddr": "10.0.0.2", 00:22:33.679 "trsvcid": "4420" 00:22:33.679 }, 00:22:33.679 "peer_address": { 00:22:33.679 "trtype": "TCP", 00:22:33.679 "adrfam": "IPv4", 00:22:33.679 "traddr": "10.0.0.1", 00:22:33.679 "trsvcid": "49064" 00:22:33.679 }, 00:22:33.679 "auth": { 00:22:33.679 "state": "completed", 00:22:33.679 "digest": "sha512", 00:22:33.679 "dhgroup": "ffdhe2048" 00:22:33.679 } 00:22:33.679 } 00:22:33.679 ]' 00:22:33.679 17:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.679 17:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.679 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.940 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:33.940 17:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:34.511 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.511 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.511 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.511 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.773 17:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.773 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.034 00:22:35.034 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.034 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.034 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.294 { 00:22:35.294 "cntlid": 111, 00:22:35.294 "qid": 0, 00:22:35.294 "state": "enabled", 00:22:35.294 "thread": "nvmf_tgt_poll_group_000", 00:22:35.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:35.294 "listen_address": { 00:22:35.294 "trtype": "TCP", 00:22:35.294 "adrfam": "IPv4", 00:22:35.294 "traddr": "10.0.0.2", 00:22:35.294 "trsvcid": "4420" 00:22:35.294 }, 00:22:35.294 "peer_address": { 00:22:35.294 "trtype": "TCP", 00:22:35.294 "adrfam": "IPv4", 00:22:35.294 "traddr": "10.0.0.1", 00:22:35.294 "trsvcid": "49108" 00:22:35.294 }, 00:22:35.294 "auth": { 00:22:35.294 "state": "completed", 00:22:35.294 "digest": "sha512", 00:22:35.294 "dhgroup": "ffdhe2048" 00:22:35.294 } 00:22:35.294 } 00:22:35.294 ]' 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.294 
17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.294 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.555 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:35.555 17:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.496 17:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.756 00:22:36.756 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.756 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.756 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.016 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.016 { 00:22:37.016 "cntlid": 113, 00:22:37.016 "qid": 0, 00:22:37.016 "state": "enabled", 00:22:37.016 "thread": "nvmf_tgt_poll_group_000", 00:22:37.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:37.016 "listen_address": { 00:22:37.016 "trtype": "TCP", 00:22:37.016 "adrfam": "IPv4", 00:22:37.016 "traddr": "10.0.0.2", 00:22:37.016 "trsvcid": "4420" 00:22:37.016 }, 00:22:37.016 "peer_address": { 00:22:37.016 "trtype": "TCP", 00:22:37.016 "adrfam": "IPv4", 00:22:37.016 "traddr": "10.0.0.1", 00:22:37.016 "trsvcid": "49136" 00:22:37.016 }, 00:22:37.016 "auth": { 00:22:37.016 "state": "completed", 00:22:37.016 "digest": "sha512", 00:22:37.016 "dhgroup": "ffdhe3072" 00:22:37.016 } 00:22:37.016 } 00:22:37.016 ]' 00:22:37.016 17:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.017 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.276 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:37.276 17:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.219 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.480 00:22:38.480 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.480 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.480 17:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.740 { 00:22:38.740 "cntlid": 115, 00:22:38.740 "qid": 0, 00:22:38.740 "state": "enabled", 00:22:38.740 "thread": "nvmf_tgt_poll_group_000", 00:22:38.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.740 "listen_address": { 00:22:38.740 "trtype": "TCP", 00:22:38.740 "adrfam": "IPv4", 00:22:38.740 "traddr": "10.0.0.2", 00:22:38.740 "trsvcid": "4420" 00:22:38.740 }, 00:22:38.740 "peer_address": { 00:22:38.740 "trtype": "TCP", 00:22:38.740 "adrfam": "IPv4", 
00:22:38.740 "traddr": "10.0.0.1", 00:22:38.740 "trsvcid": "49168" 00:22:38.740 }, 00:22:38.740 "auth": { 00:22:38.740 "state": "completed", 00:22:38.740 "digest": "sha512", 00:22:38.740 "dhgroup": "ffdhe3072" 00:22:38.740 } 00:22:38.740 } 00:22:38.740 ]' 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.740 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.000 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:39.000 17:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:39.570 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.829 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.089 00:22:40.089 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.089 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.089 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.349 { 00:22:40.349 "cntlid": 117, 00:22:40.349 "qid": 0, 00:22:40.349 "state": "enabled", 00:22:40.349 "thread": "nvmf_tgt_poll_group_000", 00:22:40.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.349 "listen_address": { 00:22:40.349 "trtype": "TCP", 
00:22:40.349 "adrfam": "IPv4", 00:22:40.349 "traddr": "10.0.0.2", 00:22:40.349 "trsvcid": "4420" 00:22:40.349 }, 00:22:40.349 "peer_address": { 00:22:40.349 "trtype": "TCP", 00:22:40.349 "adrfam": "IPv4", 00:22:40.349 "traddr": "10.0.0.1", 00:22:40.349 "trsvcid": "49212" 00:22:40.349 }, 00:22:40.349 "auth": { 00:22:40.349 "state": "completed", 00:22:40.349 "digest": "sha512", 00:22:40.349 "dhgroup": "ffdhe3072" 00:22:40.349 } 00:22:40.349 } 00:22:40.349 ]' 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:40.349 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.608 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.608 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.608 17:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.608 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:40.608 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.549 17:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.549 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.809 00:22:41.809 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.809 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.809 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.070 { 00:22:42.070 "cntlid": 119, 00:22:42.070 "qid": 0, 00:22:42.070 "state": "enabled", 00:22:42.070 "thread": "nvmf_tgt_poll_group_000", 00:22:42.070 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:42.070 "listen_address": { 00:22:42.070 "trtype": "TCP", 00:22:42.070 "adrfam": "IPv4", 00:22:42.070 "traddr": "10.0.0.2", 00:22:42.070 "trsvcid": "4420" 00:22:42.070 }, 00:22:42.070 "peer_address": { 00:22:42.070 "trtype": "TCP", 00:22:42.070 "adrfam": "IPv4", 00:22:42.070 "traddr": "10.0.0.1", 00:22:42.070 "trsvcid": "49236" 00:22:42.070 }, 00:22:42.070 "auth": { 00:22:42.070 "state": "completed", 00:22:42.070 "digest": "sha512", 00:22:42.070 "dhgroup": "ffdhe3072" 00:22:42.070 } 00:22:42.070 } 00:22:42.070 ]' 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.070 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.330 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:42.330 17:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.270 17:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.270 17:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.531 00:22:43.531 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.531 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.531 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.792 17:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.792 { 00:22:43.792 "cntlid": 121, 00:22:43.792 "qid": 0, 00:22:43.792 "state": "enabled", 00:22:43.792 "thread": "nvmf_tgt_poll_group_000", 00:22:43.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.792 "listen_address": { 00:22:43.792 "trtype": "TCP", 00:22:43.792 "adrfam": "IPv4", 00:22:43.792 "traddr": "10.0.0.2", 00:22:43.792 "trsvcid": "4420" 00:22:43.792 }, 00:22:43.792 "peer_address": { 00:22:43.792 "trtype": "TCP", 00:22:43.792 "adrfam": "IPv4", 00:22:43.792 "traddr": "10.0.0.1", 00:22:43.792 "trsvcid": "37698" 00:22:43.792 }, 00:22:43.792 "auth": { 00:22:43.792 "state": "completed", 00:22:43.792 "digest": "sha512", 00:22:43.792 "dhgroup": "ffdhe4096" 00:22:43.792 } 00:22:43.792 } 00:22:43.792 ]' 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.792 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.052 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.052 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.052 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.052 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:44.052 17:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
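The --dhchap-secret / --dhchap-ctrl-secret strings cycling through these passes follow the NVMe in-band-authentication secret representation: a 'DHHC-1:' prefix, a two-digit hash indicator (00 for an untransformed secret; 01, 02, 03 for secrets transformed with SHA-256, SHA-384, SHA-512, which is consistent with the four key indices in this run carrying the four different prefixes), the base64-encoded secret material, and a trailing colon. A quick, purely illustrative format check against one secret taken verbatim from the log above:

secret='DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==:'
re='^DHHC-1:0[0-3]:[A-Za-z0-9+/=]+:$'
[[ $secret =~ $re ]] && echo "well-formed DH-HMAC-CHAP secret"   # prints the message for this key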
00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.993 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.254 00:22:45.254 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.254 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.254 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.514 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.515 { 00:22:45.515 "cntlid": 123, 00:22:45.515 "qid": 0, 00:22:45.515 "state": "enabled", 00:22:45.515 "thread": "nvmf_tgt_poll_group_000", 00:22:45.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:45.515 "listen_address": { 00:22:45.515 "trtype": "TCP", 00:22:45.515 "adrfam": "IPv4", 00:22:45.515 "traddr": "10.0.0.2", 00:22:45.515 "trsvcid": "4420" 00:22:45.515 }, 00:22:45.515 "peer_address": { 00:22:45.515 "trtype": "TCP", 00:22:45.515 "adrfam": "IPv4", 00:22:45.515 "traddr": "10.0.0.1", 00:22:45.515 "trsvcid": "37728" 00:22:45.515 }, 00:22:45.515 "auth": { 00:22:45.515 "state": "completed", 00:22:45.515 "digest": "sha512", 00:22:45.515 "dhgroup": "ffdhe4096" 00:22:45.515 } 00:22:45.515 } 00:22:45.515 ]' 00:22:45.515 17:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.515 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.515 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.515 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:45.515 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.775 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.775 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.775 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.775 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:45.775 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:46.716 17:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.717 17:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.717 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.977 00:22:46.977 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.977 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.977 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.238 17:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.238 { 00:22:47.238 "cntlid": 125, 00:22:47.238 "qid": 0, 00:22:47.238 "state": "enabled", 00:22:47.238 "thread": "nvmf_tgt_poll_group_000", 00:22:47.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:47.238 "listen_address": { 00:22:47.238 "trtype": "TCP", 00:22:47.238 "adrfam": "IPv4", 00:22:47.238 "traddr": "10.0.0.2", 00:22:47.238 "trsvcid": "4420" 00:22:47.238 }, 00:22:47.238 "peer_address": { 00:22:47.238 "trtype": "TCP", 00:22:47.238 "adrfam": "IPv4", 00:22:47.238 "traddr": "10.0.0.1", 00:22:47.238 "trsvcid": "37740" 00:22:47.238 }, 00:22:47.238 "auth": { 00:22:47.238 "state": "completed", 00:22:47.238 "digest": "sha512", 00:22:47.238 "dhgroup": "ffdhe4096" 00:22:47.238 } 00:22:47.238 } 00:22:47.238 ]' 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.238 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.499 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:47.499 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.499 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.499 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.499 17:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.499 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:47.499 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.441 17:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.701 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.962 17:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.962 { 00:22:48.962 "cntlid": 127, 00:22:48.962 "qid": 0, 00:22:48.962 "state": "enabled", 00:22:48.962 "thread": "nvmf_tgt_poll_group_000", 00:22:48.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.962 "listen_address": { 00:22:48.962 "trtype": "TCP", 00:22:48.962 "adrfam": "IPv4", 00:22:48.962 "traddr": "10.0.0.2", 00:22:48.962 "trsvcid": "4420" 00:22:48.962 }, 00:22:48.962 "peer_address": { 00:22:48.962 "trtype": "TCP", 00:22:48.962 "adrfam": "IPv4", 00:22:48.962 "traddr": "10.0.0.1", 00:22:48.962 "trsvcid": "37776" 00:22:48.962 }, 00:22:48.962 "auth": { 00:22:48.962 "state": "completed", 00:22:48.962 "digest": "sha512", 00:22:48.962 "dhgroup": "ffdhe4096" 00:22:48.962 } 00:22:48.962 } 00:22:48.962 ]' 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.962 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.222 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:49.222 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.222 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.223 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.223 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.223 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:49.223 17:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:50.165 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.166 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.166 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.166 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.426 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.426 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.426 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.426 17:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.685 00:22:50.685 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.685 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.685 
17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.946 { 00:22:50.946 "cntlid": 129, 00:22:50.946 "qid": 0, 00:22:50.946 "state": "enabled", 00:22:50.946 "thread": "nvmf_tgt_poll_group_000", 00:22:50.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:50.946 "listen_address": { 00:22:50.946 "trtype": "TCP", 00:22:50.946 "adrfam": "IPv4", 00:22:50.946 "traddr": "10.0.0.2", 00:22:50.946 "trsvcid": "4420" 00:22:50.946 }, 00:22:50.946 "peer_address": { 00:22:50.946 "trtype": "TCP", 00:22:50.946 "adrfam": "IPv4", 00:22:50.946 "traddr": "10.0.0.1", 00:22:50.946 "trsvcid": "37808" 00:22:50.946 }, 00:22:50.946 "auth": { 00:22:50.946 "state": "completed", 00:22:50.946 "digest": "sha512", 00:22:50.946 "dhgroup": "ffdhe6144" 00:22:50.946 } 00:22:50.946 } 00:22:50.946 ]' 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.946 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.207 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:51.207 17:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:51.778 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.040 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.611 00:22:52.611 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.611 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.611 17:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.611 { 00:22:52.611 "cntlid": 131, 00:22:52.611 "qid": 0, 00:22:52.611 "state": "enabled", 00:22:52.611 "thread": "nvmf_tgt_poll_group_000", 00:22:52.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.611 "listen_address": { 00:22:52.611 "trtype": "TCP", 00:22:52.611 "adrfam": "IPv4", 00:22:52.611 "traddr": "10.0.0.2", 00:22:52.611 "trsvcid": "4420" 00:22:52.611 }, 00:22:52.611 "peer_address": { 00:22:52.611 "trtype": "TCP", 00:22:52.611 "adrfam": "IPv4", 00:22:52.611 "traddr": "10.0.0.1", 00:22:52.611 "trsvcid": "37836" 00:22:52.611 }, 00:22:52.611 "auth": { 00:22:52.611 "state": "completed", 00:22:52.611 "digest": "sha512", 00:22:52.611 "dhgroup": "ffdhe6144" 00:22:52.611 } 00:22:52.611 } 00:22:52.611 ]' 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.611 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:52.871 17:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.814 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.386 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.386 { 00:22:54.386 "cntlid": 133, 00:22:54.386 "qid": 0, 00:22:54.386 "state": "enabled", 00:22:54.386 "thread": "nvmf_tgt_poll_group_000", 00:22:54.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.386 "listen_address": { 00:22:54.386 "trtype": "TCP", 00:22:54.386 "adrfam": "IPv4", 00:22:54.386 "traddr": "10.0.0.2", 00:22:54.386 "trsvcid": "4420" 00:22:54.386 }, 00:22:54.386 "peer_address": { 00:22:54.386 "trtype": "TCP", 00:22:54.386 "adrfam": "IPv4", 00:22:54.386 "traddr": "10.0.0.1", 00:22:54.386 "trsvcid": "55364" 00:22:54.386 }, 00:22:54.386 "auth": { 00:22:54.386 "state": "completed", 00:22:54.386 "digest": "sha512", 00:22:54.386 "dhgroup": "ffdhe6144" 00:22:54.386 } 00:22:54.386 } 00:22:54.386 ]' 00:22:54.386 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.647 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.647 17:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.647 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:54.647 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.647 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.647 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.647 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.908 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret 
DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:54.908 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:22:55.479 17:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.479 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:55.739 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.000 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.260 { 00:22:56.260 "cntlid": 135, 00:22:56.260 "qid": 0, 00:22:56.260 "state": "enabled", 00:22:56.260 "thread": "nvmf_tgt_poll_group_000", 00:22:56.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:56.260 "listen_address": { 00:22:56.260 "trtype": "TCP", 00:22:56.260 "adrfam": "IPv4", 00:22:56.260 "traddr": "10.0.0.2", 00:22:56.260 "trsvcid": "4420" 00:22:56.260 }, 00:22:56.260 "peer_address": { 00:22:56.260 "trtype": "TCP", 00:22:56.260 "adrfam": "IPv4", 00:22:56.260 "traddr": "10.0.0.1", 00:22:56.260 "trsvcid": "55396" 00:22:56.260 }, 00:22:56.260 "auth": { 00:22:56.260 "state": "completed", 00:22:56.260 "digest": "sha512", 00:22:56.260 "dhgroup": "ffdhe6144" 00:22:56.260 } 00:22:56.260 } 00:22:56.260 ]' 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.260 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.521 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:56.521 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.521 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.521 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.521 17:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.521 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:56.521 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.462 17:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.034 00:22:58.034 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.034 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.034 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.294 { 00:22:58.294 "cntlid": 137, 00:22:58.294 "qid": 0, 00:22:58.294 "state": "enabled", 00:22:58.294 "thread": "nvmf_tgt_poll_group_000", 00:22:58.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:58.294 "listen_address": { 00:22:58.294 "trtype": "TCP", 00:22:58.294 "adrfam": "IPv4", 00:22:58.294 "traddr": "10.0.0.2", 00:22:58.294 "trsvcid": "4420" 00:22:58.294 }, 00:22:58.294 "peer_address": { 00:22:58.294 "trtype": "TCP", 00:22:58.294 "adrfam": "IPv4", 00:22:58.294 "traddr": "10.0.0.1", 00:22:58.294 "trsvcid": "55432" 00:22:58.294 }, 00:22:58.294 "auth": { 00:22:58.294 "state": "completed", 00:22:58.294 "digest": "sha512", 00:22:58.294 "dhgroup": "ffdhe8192" 00:22:58.294 } 00:22:58.294 } 00:22:58.294 ]' 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.294 17:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.555 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:58.555 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.498 17:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.498 17:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.070 00:23:00.071 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.071 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.071 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.331 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.331 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.331 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.331 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.331 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.332 { 00:23:00.332 "cntlid": 139, 00:23:00.332 "qid": 0, 00:23:00.332 "state": "enabled", 00:23:00.332 "thread": "nvmf_tgt_poll_group_000", 00:23:00.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:00.332 "listen_address": { 00:23:00.332 "trtype": "TCP", 00:23:00.332 "adrfam": "IPv4", 00:23:00.332 "traddr": "10.0.0.2", 00:23:00.332 "trsvcid": "4420" 00:23:00.332 }, 00:23:00.332 "peer_address": { 00:23:00.332 "trtype": "TCP", 00:23:00.332 "adrfam": "IPv4", 00:23:00.332 "traddr": "10.0.0.1", 00:23:00.332 "trsvcid": "55466" 00:23:00.332 }, 00:23:00.332 "auth": { 00:23:00.332 "state": "completed", 00:23:00.332 "digest": "sha512", 00:23:00.332 "dhgroup": "ffdhe8192" 00:23:00.332 } 00:23:00.332 } 00:23:00.332 ]' 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.332 17:22:58 
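Spelled out, the three jq probes above are the assertions made at target/auth.sh@75-77 for this ffdhe8192 pass; with $SUBNQN as in the earlier sketch, they are roughly equivalent to:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished

Only after all three checks pass does the script detach the controller and re-run the same handshake through nvme connect.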
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.332 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.592 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:23:00.592 17:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: --dhchap-ctrl-secret DHHC-1:02:NjQ1ZjA5NmFiNmNiMDRhMjAxYzIzYjY0MTYxZjlkYjExZjMxM2JmMWZmZjUyNjY1DEziog==: 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.535 17:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.535 17:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.108 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.108 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.369 { 00:23:02.369 "cntlid": 141, 00:23:02.369 "qid": 0, 00:23:02.369 "state": "enabled", 00:23:02.369 "thread": "nvmf_tgt_poll_group_000", 00:23:02.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:02.369 "listen_address": { 00:23:02.369 "trtype": "TCP", 00:23:02.369 "adrfam": "IPv4", 00:23:02.369 "traddr": "10.0.0.2", 00:23:02.369 "trsvcid": "4420" 00:23:02.369 }, 00:23:02.369 "peer_address": { 00:23:02.369 "trtype": "TCP", 00:23:02.369 "adrfam": "IPv4", 00:23:02.369 "traddr": "10.0.0.1", 00:23:02.369 "trsvcid": "55504" 00:23:02.369 }, 00:23:02.369 "auth": { 00:23:02.369 "state": "completed", 00:23:02.369 "digest": "sha512", 00:23:02.369 "dhgroup": "ffdhe8192" 00:23:02.369 } 00:23:02.369 } 00:23:02.369 ]' 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.369 17:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.369 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.630 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:23:02.630 17:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:01:NmUyYzVlZDllZTFkNTA5MDQ0ZTQ3ZTExZDc2YzY0ZGXBtqiM: 00:23:03.200 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.200 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.200 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.200 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.200 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.485 17:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.485 17:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:04.165 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.165 { 00:23:04.165 "cntlid": 143, 00:23:04.165 "qid": 0, 00:23:04.165 "state": "enabled", 00:23:04.165 "thread": "nvmf_tgt_poll_group_000", 00:23:04.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:04.165 "listen_address": { 00:23:04.165 "trtype": "TCP", 00:23:04.165 "adrfam": "IPv4", 00:23:04.165 "traddr": "10.0.0.2", 00:23:04.165 "trsvcid": "4420" 00:23:04.165 }, 00:23:04.165 "peer_address": { 00:23:04.165 "trtype": "TCP", 00:23:04.165 "adrfam": "IPv4", 00:23:04.165 "traddr": "10.0.0.1", 00:23:04.165 "trsvcid": "43704" 00:23:04.165 }, 00:23:04.165 "auth": { 00:23:04.165 "state": "completed", 00:23:04.165 "digest": "sha512", 00:23:04.165 "dhgroup": "ffdhe8192" 00:23:04.165 } 00:23:04.165 } 00:23:04.165 ]' 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.165 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.165 
17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:04.442 17:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:05.383 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.384 17:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.384 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.644 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.644 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.644 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.644 17:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.215 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.215 { 00:23:06.215 "cntlid": 145, 00:23:06.215 "qid": 0, 00:23:06.215 "state": "enabled", 00:23:06.215 "thread": "nvmf_tgt_poll_group_000", 00:23:06.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:06.215 "listen_address": { 00:23:06.215 "trtype": "TCP", 00:23:06.215 "adrfam": "IPv4", 00:23:06.215 "traddr": "10.0.0.2", 00:23:06.215 "trsvcid": "4420" 00:23:06.215 }, 00:23:06.215 "peer_address": { 00:23:06.215 
"trtype": "TCP", 00:23:06.215 "adrfam": "IPv4", 00:23:06.215 "traddr": "10.0.0.1", 00:23:06.215 "trsvcid": "43720" 00:23:06.215 }, 00:23:06.215 "auth": { 00:23:06.215 "state": "completed", 00:23:06.215 "digest": "sha512", 00:23:06.215 "dhgroup": "ffdhe8192" 00:23:06.215 } 00:23:06.215 } 00:23:06.215 ]' 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.215 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.476 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.476 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.476 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.476 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:23:06.476 17:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZmI4NGI2NDMzN2ZiYTdkMGIxZDFhYWFmMTc1YjMwOGUwOTQxNzJjN2NlM2NkNTFmTz+1jQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliYmFhNzYzN2NlZTk0NTBmOGM0OGFjY2E1MTcxN2UwNDU4NmNkNTQyNmEwNGFjMTAyZDU5ODNjMDc2ZjIxOdcbX0o=: 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.418 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.419 17:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.679 request: 00:23:07.679 { 00:23:07.679 "name": "nvme0", 00:23:07.679 "trtype": "tcp", 00:23:07.679 "traddr": "10.0.0.2", 00:23:07.679 "adrfam": "ipv4", 00:23:07.679 "trsvcid": "4420", 00:23:07.679 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:07.679 "prchk_reftag": false, 00:23:07.679 "prchk_guard": false, 00:23:07.679 "hdgst": false, 00:23:07.679 "ddgst": false, 00:23:07.679 "dhchap_key": "key2", 00:23:07.679 "allow_unrecognized_csi": false, 00:23:07.679 "method": "bdev_nvme_attach_controller", 00:23:07.679 "req_id": 1 00:23:07.679 } 00:23:07.679 Got JSON-RPC error response 00:23:07.679 response: 00:23:07.679 { 00:23:07.679 "code": -5, 00:23:07.679 "message": "Input/output error" 00:23:07.679 } 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.940 17:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.940 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:08.201 request: 00:23:08.201 { 00:23:08.201 "name": "nvme0", 00:23:08.201 "trtype": "tcp", 00:23:08.201 "traddr": "10.0.0.2", 00:23:08.201 "adrfam": "ipv4", 00:23:08.201 "trsvcid": "4420", 00:23:08.201 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:08.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.201 "prchk_reftag": false, 00:23:08.201 "prchk_guard": false, 00:23:08.201 "hdgst": false, 00:23:08.201 "ddgst": false, 00:23:08.201 "dhchap_key": "key1", 00:23:08.201 "dhchap_ctrlr_key": "ckey2", 00:23:08.201 "allow_unrecognized_csi": false, 00:23:08.201 "method": "bdev_nvme_attach_controller", 00:23:08.201 "req_id": 1 00:23:08.201 } 00:23:08.201 Got JSON-RPC error response 00:23:08.201 response: 00:23:08.201 { 00:23:08.201 "code": -5, 00:23:08.201 "message": "Input/output error" 00:23:08.201 } 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:08.463 17:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.463 17:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.724 request: 00:23:08.724 { 00:23:08.724 "name": "nvme0", 00:23:08.724 "trtype": "tcp", 00:23:08.724 "traddr": "10.0.0.2", 00:23:08.724 "adrfam": "ipv4", 00:23:08.724 "trsvcid": "4420", 00:23:08.724 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:08.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.724 "prchk_reftag": false, 00:23:08.724 "prchk_guard": false, 00:23:08.724 "hdgst": false, 00:23:08.724 "ddgst": false, 00:23:08.724 "dhchap_key": "key1", 00:23:08.724 "dhchap_ctrlr_key": "ckey1", 00:23:08.724 "allow_unrecognized_csi": false, 00:23:08.724 "method": "bdev_nvme_attach_controller", 00:23:08.724 "req_id": 1 00:23:08.724 } 00:23:08.724 Got JSON-RPC error response 00:23:08.724 response: 00:23:08.724 { 00:23:08.724 "code": -5, 00:23:08.724 "message": "Input/output error" 00:23:08.724 } 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3026474 ']' 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026474' 00:23:08.985 killing process with pid 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3026474 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3054304 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3054304 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3054304 ']' 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.985 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3054304 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3054304 ']' 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
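The restart sequence above boils down to: kill the previous nvmf_tgt, launch a new one with --wait-for-rpc and the nvmf_auth debug log flag, and block until its RPC socket answers. A minimal sketch of that pattern, using the binary and socket paths from this run, follows; the ip netns exec prefix used in the run is omitted, the polling loop merely stands in for the suite's waitforlisten helper, and probing readiness with rpc_get_methods is an assumption rather than what that helper actually calls.

# Sketch: start nvmf_tgt in the pre-init state and wait for /var/tmp/spdk.sock to answer.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the default RPC socket until the application is listening (assumed readiness probe).
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# --wait-for-rpc keeps the target paused before framework init, so keyring entries and
# subsystem configuration can be pushed over RPC before any host is allowed to connect.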
00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.247 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.507 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.507 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:09.507 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:09.507 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.507 17:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.507 null0 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AiM 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.9eN ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9eN 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c5C 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BKb ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BKb 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.769 17:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wAD 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.lWL ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lWL 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
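Condensed, the keyring-based path exercised here is: register the key file with keyring_file_add_key, allow the host NQN on the subsystem with that key, then attach from the host-side bdev_nvme application using the same key name. The sketch below reuses only names that appear in this run (key3, /tmp/spdk.key-sha512.1Uu, the host RPC socket /var/tmp/host.sock); registering the key on the host application as well is an assumption about wiring not shown in this excerpt, and the listener/namespace setup from earlier in the run is elided.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# Target side: put the DH-HMAC-CHAP secret into the keyring and require it for this host.
$rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
# Host side: the separate bdev_nvme app on /var/tmp/host.sock needs the key in its own
# keyring (assumed here), then authenticates during attach with --dhchap-key.
$rpc -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1Uu
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3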
00:23:09.769 17:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.711 nvme0n1 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.711 { 00:23:10.711 "cntlid": 1, 00:23:10.711 "qid": 0, 00:23:10.711 "state": "enabled", 00:23:10.711 "thread": "nvmf_tgt_poll_group_000", 00:23:10.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:10.711 "listen_address": { 00:23:10.711 "trtype": "TCP", 00:23:10.711 "adrfam": "IPv4", 00:23:10.711 "traddr": "10.0.0.2", 00:23:10.711 "trsvcid": "4420" 00:23:10.711 }, 00:23:10.711 "peer_address": { 00:23:10.711 "trtype": "TCP", 00:23:10.711 "adrfam": "IPv4", 00:23:10.711 "traddr": "10.0.0.1", 00:23:10.711 "trsvcid": "43782" 00:23:10.711 }, 00:23:10.711 "auth": { 00:23:10.711 "state": "completed", 00:23:10.711 "digest": "sha512", 00:23:10.711 "dhgroup": "ffdhe8192" 00:23:10.711 } 00:23:10.711 } 00:23:10.711 ]' 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.711 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:10.971 17:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.942 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.943 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.202 request: 00:23:12.202 { 00:23:12.202 "name": "nvme0", 00:23:12.202 "trtype": "tcp", 00:23:12.202 "traddr": "10.0.0.2", 00:23:12.202 "adrfam": "ipv4", 00:23:12.202 "trsvcid": "4420", 00:23:12.202 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.202 "prchk_reftag": false, 00:23:12.202 "prchk_guard": false, 00:23:12.202 "hdgst": false, 00:23:12.202 "ddgst": false, 00:23:12.202 "dhchap_key": "key3", 00:23:12.202 "allow_unrecognized_csi": false, 00:23:12.202 "method": "bdev_nvme_attach_controller", 00:23:12.202 "req_id": 1 00:23:12.202 } 00:23:12.202 Got JSON-RPC error response 00:23:12.202 response: 00:23:12.202 { 00:23:12.202 "code": -5, 00:23:12.202 "message": "Input/output error" 00:23:12.202 } 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:12.202 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.462 request: 00:23:12.462 { 00:23:12.462 "name": "nvme0", 00:23:12.462 "trtype": "tcp", 00:23:12.462 "traddr": "10.0.0.2", 00:23:12.462 "adrfam": "ipv4", 00:23:12.462 "trsvcid": "4420", 00:23:12.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.462 "prchk_reftag": false, 00:23:12.462 "prchk_guard": false, 00:23:12.462 "hdgst": false, 00:23:12.462 "ddgst": false, 00:23:12.462 "dhchap_key": "key3", 00:23:12.462 "allow_unrecognized_csi": false, 00:23:12.462 "method": "bdev_nvme_attach_controller", 00:23:12.462 "req_id": 1 00:23:12.462 } 00:23:12.462 Got JSON-RPC error response 00:23:12.462 response: 00:23:12.462 { 00:23:12.462 "code": -5, 00:23:12.462 "message": "Input/output error" 00:23:12.462 } 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.462 17:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.721 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.980 request: 00:23:12.980 { 00:23:12.980 "name": "nvme0", 00:23:12.980 "trtype": "tcp", 00:23:12.980 "traddr": "10.0.0.2", 00:23:12.980 "adrfam": "ipv4", 00:23:12.980 "trsvcid": "4420", 00:23:12.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.980 "prchk_reftag": false, 00:23:12.980 "prchk_guard": false, 00:23:12.980 "hdgst": false, 00:23:12.980 "ddgst": false, 00:23:12.980 "dhchap_key": "key0", 00:23:12.980 "dhchap_ctrlr_key": "key1", 00:23:12.980 "allow_unrecognized_csi": false, 00:23:12.980 "method": "bdev_nvme_attach_controller", 00:23:12.980 "req_id": 1 00:23:12.980 } 00:23:12.980 Got JSON-RPC error response 00:23:12.980 response: 00:23:12.980 { 00:23:12.980 "code": -5, 00:23:12.980 "message": "Input/output error" 00:23:12.980 } 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.980 17:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.980 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:13.238 nvme0n1 00:23:13.238 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:13.238 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:13.238 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.498 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.498 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.498 17:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.757 17:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:14.696 nvme0n1 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:14.696 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.957 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.957 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:14.957 17:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: --dhchap-ctrl-secret DHHC-1:03:YjAzMWY3YjdkZWUxMDQ3YjY3YmE3YTNjOWMyMDMzOTQ2MmZlNGYxYzQ3NTg5ODM1MjI3MGNjNzhmZDJiYTM1NaBnAy0=: 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:15.897 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:16.468 request: 00:23:16.468 { 00:23:16.468 "name": "nvme0", 00:23:16.468 "trtype": "tcp", 00:23:16.468 "traddr": "10.0.0.2", 00:23:16.468 "adrfam": "ipv4", 00:23:16.468 "trsvcid": "4420", 00:23:16.468 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:16.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:16.468 "prchk_reftag": false, 00:23:16.468 "prchk_guard": false, 00:23:16.468 "hdgst": false, 00:23:16.468 "ddgst": false, 00:23:16.468 "dhchap_key": "key1", 00:23:16.468 "allow_unrecognized_csi": false, 00:23:16.468 "method": "bdev_nvme_attach_controller", 00:23:16.468 "req_id": 1 00:23:16.468 } 00:23:16.468 Got JSON-RPC error response 00:23:16.468 response: 00:23:16.468 { 00:23:16.468 "code": -5, 00:23:16.468 "message": "Input/output error" 00:23:16.468 } 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.468 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.469 17:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:17.408 nvme0n1 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.408 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:17.675 17:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:17.675 nvme0n1 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.935 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: '' 2s 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: ]] 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGEyN2QyMDMxMjhiMmU0YzNlZGU3ZDU3YTA5OTI4MDGfzKv6: 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:18.196 17:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: 2s 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: ]] 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTkwYmUwYzBkN2JhZDAyZWEwNTU1MDkwNDI4YzM2MzM3MjY2YmNkYTRlMmI3NzQwcDDsRw==: 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:20.111 17:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:22.655 17:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:23.226 nvme0n1 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.226 17:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:23.796 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:24.056 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:24.056 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.056 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:24.317 17:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:24.887 request: 00:23:24.887 { 00:23:24.887 "name": "nvme0", 00:23:24.887 "dhchap_key": "key1", 00:23:24.887 "dhchap_ctrlr_key": "key3", 00:23:24.887 "method": "bdev_nvme_set_keys", 00:23:24.887 "req_id": 1 00:23:24.887 } 00:23:24.887 Got JSON-RPC error response 00:23:24.887 response: 00:23:24.887 { 00:23:24.887 "code": -13, 00:23:24.887 "message": "Permission denied" 00:23:24.887 } 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:24.887 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:24.888 17:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:25.828 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:25.828 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:25.828 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:26.089 17:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:27.029 nvme0n1 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
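Reading aid: the DH-HMAC-CHAP re-key exchange being traced here reduces to the RPC sequence below. This is only a condensed restatement of calls already visible in this log (rpc.py path, RPC socket, NQNs and key names are the ones used by this run), not an extra test step; the target-side call is written as a direct rpc.py invocation for symmetry, whereas in the trace it goes through the rpc_cmd helper.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side (default RPC socket): rotate the subsystem keys for this host NQN
  $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side (host.sock): push the matching keys to the live controller
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # a mismatched pair (key2/key0, attempted next in the trace) is expected to fail with -13 Permission denied
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0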
00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:27.030 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:27.599 request: 00:23:27.599 { 00:23:27.599 "name": "nvme0", 00:23:27.599 "dhchap_key": "key2", 00:23:27.599 "dhchap_ctrlr_key": "key0", 00:23:27.599 "method": "bdev_nvme_set_keys", 00:23:27.599 "req_id": 1 00:23:27.599 } 00:23:27.599 Got JSON-RPC error response 00:23:27.599 response: 00:23:27.599 { 00:23:27.599 "code": -13, 00:23:27.599 "message": "Permission denied" 00:23:27.599 } 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.599 17:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:27.599 17:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:27.599 17:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3026734 ']' 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:28.983 
17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026734' 00:23:28.983 killing process with pid 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3026734 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.983 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.244 rmmod nvme_tcp 00:23:29.244 rmmod nvme_fabrics 00:23:29.244 rmmod nvme_keyring 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3054304 ']' 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3054304 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3054304 ']' 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3054304 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3054304 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3054304' 00:23:29.244 killing process with pid 3054304 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3054304 00:23:29.244 17:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3054304 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:29.244 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:29.505 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.505 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.505 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.505 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.505 17:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.AiM /tmp/spdk.key-sha256.c5C /tmp/spdk.key-sha384.wAD /tmp/spdk.key-sha512.1Uu /tmp/spdk.key-sha512.9eN /tmp/spdk.key-sha384.BKb /tmp/spdk.key-sha256.lWL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:31.418 00:23:31.418 real 2m44.582s 00:23:31.418 user 6m6.788s 00:23:31.418 sys 0m24.526s 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.418 ************************************ 00:23:31.418 END TEST nvmf_auth_target 00:23:31.418 ************************************ 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.418 ************************************ 00:23:31.418 START TEST nvmf_bdevio_no_huge 00:23:31.418 ************************************ 00:23:31.418 17:23:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:31.680 * Looking for test storage... 
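Reading aid: the auth-target teardown traced above (cleanup plus nvmftestfini) comes down to roughly the following; the pids, interface name and key file names are specific to this run, the process roles are inferred from the script paths in the trace, and the file list is condensed rather than exhaustive.
  kill 3026734; wait 3026734        # process killed by target/auth.sh cleanup (presumably the host-side SPDK app behind /var/tmp/host.sock)
  kill 3054304; wait 3054304        # process killed by nvmftestfini (presumably the NVMe-oF target application)
  modprobe -v -r nvme-tcp           # also unloads nvme_fabrics and nvme_keyring, per the rmmod messages above
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk.key-*             # the generated DH-HMAC-CHAP key files (plus the nvme-auth/nvmf-auth logs)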
00:23:31.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:31.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.680 --rc genhtml_branch_coverage=1 00:23:31.680 --rc genhtml_function_coverage=1 00:23:31.680 --rc genhtml_legend=1 00:23:31.680 --rc geninfo_all_blocks=1 00:23:31.680 --rc geninfo_unexecuted_blocks=1 00:23:31.680 00:23:31.680 ' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:31.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.680 --rc genhtml_branch_coverage=1 00:23:31.680 --rc genhtml_function_coverage=1 00:23:31.680 --rc genhtml_legend=1 00:23:31.680 --rc geninfo_all_blocks=1 00:23:31.680 --rc geninfo_unexecuted_blocks=1 00:23:31.680 00:23:31.680 ' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:31.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.680 --rc genhtml_branch_coverage=1 00:23:31.680 --rc genhtml_function_coverage=1 00:23:31.680 --rc genhtml_legend=1 00:23:31.680 --rc geninfo_all_blocks=1 00:23:31.680 --rc geninfo_unexecuted_blocks=1 00:23:31.680 00:23:31.680 ' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:31.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.680 --rc genhtml_branch_coverage=1 00:23:31.680 --rc genhtml_function_coverage=1 00:23:31.680 --rc genhtml_legend=1 00:23:31.680 --rc geninfo_all_blocks=1 00:23:31.680 --rc geninfo_unexecuted_blocks=1 00:23:31.680 00:23:31.680 ' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.680 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:31.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.681 17:23:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.818 
17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.818 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:39.818 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:39.819 00:23:39.819 --- 10.0.0.2 ping statistics --- 00:23:39.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.819 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:39.819 00:23:39.819 --- 10.0.0.1 ping statistics --- 00:23:39.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.819 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3062482 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3062482 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3062482 ']' 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.819 17:23:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.819 [2024-10-01 17:23:37.637555] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:23:39.819 [2024-10-01 17:23:37.637644] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:39.819 [2024-10-01 17:23:37.736183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.819 [2024-10-01 17:23:37.818921] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.819 [2024-10-01 17:23:37.818975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.819 [2024-10-01 17:23:37.818983] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.819 [2024-10-01 17:23:37.818990] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.819 [2024-10-01 17:23:37.819004] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.819 [2024-10-01 17:23:37.819168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.819 [2024-10-01 17:23:37.819427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:39.819 [2024-10-01 17:23:37.819584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:39.819 [2024-10-01 17:23:37.819586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 [2024-10-01 17:23:38.506256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 Malloc0 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.080 [2024-10-01 17:23:38.559609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:40.080 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:40.080 { 00:23:40.080 "params": { 00:23:40.081 "name": "Nvme$subsystem", 00:23:40.081 "trtype": "$TEST_TRANSPORT", 00:23:40.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.081 "adrfam": "ipv4", 00:23:40.081 "trsvcid": "$NVMF_PORT", 00:23:40.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.081 "hdgst": ${hdgst:-false}, 00:23:40.081 "ddgst": ${ddgst:-false} 00:23:40.081 }, 00:23:40.081 "method": "bdev_nvme_attach_controller" 00:23:40.081 } 00:23:40.081 EOF 00:23:40.081 )") 00:23:40.081 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:40.081 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:23:40.081 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:40.081 17:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:40.081 "params": { 00:23:40.081 "name": "Nvme1", 00:23:40.081 "trtype": "tcp", 00:23:40.081 "traddr": "10.0.0.2", 00:23:40.081 "adrfam": "ipv4", 00:23:40.081 "trsvcid": "4420", 00:23:40.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.081 "hdgst": false, 00:23:40.081 "ddgst": false 00:23:40.081 }, 00:23:40.081 "method": "bdev_nvme_attach_controller" 00:23:40.081 }' 00:23:40.081 [2024-10-01 17:23:38.616055] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:23:40.081 [2024-10-01 17:23:38.616129] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3062525 ] 00:23:40.341 [2024-10-01 17:23:38.685524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.341 [2024-10-01 17:23:38.758762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.341 [2024-10-01 17:23:38.758882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.341 [2024-10-01 17:23:38.758886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.601 I/O targets: 00:23:40.601 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:40.601 00:23:40.601 00:23:40.601 CUnit - A unit testing framework for C - Version 2.1-3 00:23:40.601 http://cunit.sourceforge.net/ 00:23:40.601 00:23:40.601 00:23:40.601 Suite: bdevio tests on: Nvme1n1 00:23:40.601 Test: blockdev write read block ...passed 00:23:40.601 Test: blockdev write zeroes read block ...passed 00:23:40.601 Test: blockdev write zeroes read no split ...passed 00:23:40.601 Test: blockdev write zeroes read split ...passed 00:23:40.601 Test: blockdev write zeroes read split partial ...passed 00:23:40.601 Test: blockdev reset ...[2024-10-01 17:23:39.139308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:40.601 [2024-10-01 17:23:39.139377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18855f0 (9): Bad file descriptor 00:23:40.862 [2024-10-01 17:23:39.199194] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:40.862 passed 00:23:40.862 Test: blockdev write read 8 blocks ...passed 00:23:40.862 Test: blockdev write read size > 128k ...passed 00:23:40.862 Test: blockdev write read invalid size ...passed 00:23:40.862 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:40.862 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:40.862 Test: blockdev write read max offset ...passed 00:23:40.862 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:41.122 Test: blockdev writev readv 8 blocks ...passed 00:23:41.122 Test: blockdev writev readv 30 x 1block ...passed 00:23:41.122 Test: blockdev writev readv block ...passed 00:23:41.122 Test: blockdev writev readv size > 128k ...passed 00:23:41.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:41.122 Test: blockdev comparev and writev ...[2024-10-01 17:23:39.464490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.464519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.464531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.464537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.465005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.465016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.465026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.465031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.465542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.465552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.465557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.466020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.466029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.466039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:41.122 [2024-10-01 17:23:39.466044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:41.122 passed 00:23:41.122 Test: blockdev nvme passthru rw ...passed 00:23:41.122 Test: blockdev nvme passthru vendor specific ...[2024-10-01 17:23:39.551800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.122 [2024-10-01 17:23:39.551811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.552150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.122 [2024-10-01 17:23:39.552160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.552494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.122 [2024-10-01 17:23:39.552503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:41.122 [2024-10-01 17:23:39.552842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:41.122 [2024-10-01 17:23:39.552850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:41.122 passed 00:23:41.122 Test: blockdev nvme admin passthru ...passed 00:23:41.122 Test: blockdev copy ...passed 00:23:41.122 00:23:41.122 Run Summary: Type Total Ran Passed Failed Inactive 00:23:41.122 suites 1 1 n/a 0 0 00:23:41.122 tests 23 23 23 0 0 00:23:41.122 asserts 152 152 152 0 n/a 00:23:41.122 00:23:41.122 Elapsed time = 1.397 seconds 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:41.382 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.383 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.383 rmmod nvme_tcp 00:23:41.383 rmmod nvme_fabrics 00:23:41.383 rmmod nvme_keyring 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3062482 ']' 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 3062482 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3062482 ']' 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3062482 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.643 17:23:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3062482 00:23:41.643 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:41.643 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:41.643 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3062482' 00:23:41.643 killing process with pid 3062482 00:23:41.643 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3062482 00:23:41.643 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3062482 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.903 17:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.442 00:23:44.442 real 0m12.420s 00:23:44.442 user 0m14.083s 00:23:44.442 sys 0m6.565s 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.442 ************************************ 00:23:44.442 END TEST nvmf_bdevio_no_huge 00:23:44.442 ************************************ 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:44.442 ************************************ 00:23:44.442 START TEST nvmf_tls 00:23:44.442 ************************************ 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:44.442 * Looking for test storage... 00:23:44.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:44.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.442 --rc genhtml_branch_coverage=1 00:23:44.442 --rc genhtml_function_coverage=1 00:23:44.442 --rc genhtml_legend=1 00:23:44.442 --rc geninfo_all_blocks=1 00:23:44.442 --rc geninfo_unexecuted_blocks=1 00:23:44.442 00:23:44.442 ' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:44.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.442 --rc genhtml_branch_coverage=1 00:23:44.442 --rc genhtml_function_coverage=1 00:23:44.442 --rc genhtml_legend=1 00:23:44.442 --rc geninfo_all_blocks=1 00:23:44.442 --rc geninfo_unexecuted_blocks=1 00:23:44.442 00:23:44.442 ' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:44.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.442 --rc genhtml_branch_coverage=1 00:23:44.442 --rc genhtml_function_coverage=1 00:23:44.442 --rc genhtml_legend=1 00:23:44.442 --rc geninfo_all_blocks=1 00:23:44.442 --rc geninfo_unexecuted_blocks=1 00:23:44.442 00:23:44.442 ' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:44.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.442 --rc genhtml_branch_coverage=1 00:23:44.442 --rc genhtml_function_coverage=1 00:23:44.442 --rc genhtml_legend=1 00:23:44.442 --rc geninfo_all_blocks=1 00:23:44.442 --rc geninfo_unexecuted_blocks=1 00:23:44.442 00:23:44.442 ' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.442 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.443 17:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.023 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.024 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.284 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.285 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:23:51.546 00:23:51.546 --- 10.0.0.2 ping statistics --- 00:23:51.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.546 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:23:51.546 00:23:51.546 --- 10.0.0.1 ping statistics --- 00:23:51.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.546 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3067102 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3067102 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3067102 ']' 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.546 17:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.546 [2024-10-01 17:23:49.964192] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
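The trace above is nvmf_tcp_init from nvmf/common.sh wiring the two detected ice (E810) ports into a back-to-back test bed: one port is moved into a dedicated network namespace and carries the NVMe/TCP target address, the other stays in the default namespace as the initiator, and a firewall rule plus two pings confirm the path before the TLS tests start. A minimal sketch of the same steps, using the interface names and addresses printed in the log (the variable names here are illustrative, not taken from the script):

# condensed from the nvmf_tcp_init trace above
TARGET_IF=cvl_0_0          # moved into a netns, gets the target IP 10.0.0.2
INITIATOR_IF=cvl_0_1       # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator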
00:23:51.546 [2024-10-01 17:23:49.964253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.546 [2024-10-01 17:23:50.051397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.546 [2024-10-01 17:23:50.082894] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.546 [2024-10-01 17:23:50.082930] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.546 [2024-10-01 17:23:50.082939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.546 [2024-10-01 17:23:50.082945] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.546 [2024-10-01 17:23:50.082954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.546 [2024-10-01 17:23:50.082972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:51.807 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:51.807 true 00:23:52.067 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.067 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:52.067 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:52.067 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:52.067 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:52.327 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.327 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:52.588 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:52.588 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:52.588 17:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:52.588 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.588 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:52.849 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:52.849 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:52.849 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.849 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:53.110 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:53.110 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:53.110 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:53.371 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:53.371 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.371 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:53.371 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:53.371 17:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:53.632 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:53.632 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.893 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pDI60gb5u1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.0xcd3EZDTM 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pDI60gb5u1 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.0xcd3EZDTM 00:23:53.894 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:54.154 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:54.433 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pDI60gb5u1 00:23:54.433 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pDI60gb5u1 00:23:54.433 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.433 [2024-10-01 17:23:52.948573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.433 17:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.750 17:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.040 [2024-10-01 17:23:53.317489] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.040 [2024-10-01 17:23:53.317850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.040 17:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.040 malloc0 00:23:55.040 17:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.300 17:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1 00:23:55.561 17:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.561 17:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pDI60gb5u1 00:24:05.606 Initializing NVMe Controllers 00:24:05.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:05.606 Initialization complete. Launching workers. 00:24:05.606 ======================================================== 00:24:05.606 Latency(us) 00:24:05.606 Device Information : IOPS MiB/s Average min max 00:24:05.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18650.36 72.85 3431.58 1144.15 4387.36 00:24:05.606 ======================================================== 00:24:05.606 Total : 18650.36 72.85 3431.58 1144.15 4387.36 00:24:05.606 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDI60gb5u1 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pDI60gb5u1 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070015 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070015 /var/tmp/bdevperf.sock 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3070015 ']' 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:05.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.606 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.607 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.868 [2024-10-01 17:24:04.182424] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:05.868 [2024-10-01 17:24:04.182483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070015 ] 00:24:05.868 [2024-10-01 17:24:04.232046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.868 [2024-10-01 17:24:04.260280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.868 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.868 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:05.868 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1 00:24:06.129 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.129 [2024-10-01 17:24:04.660105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.389 TLSTESTn1 00:24:06.389 17:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:06.389 Running I/O for 10 seconds... 
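The RPC sequence traced above (target/tls.sh setup_nvmf_tgt followed by the perf/bdevperf attach) is the core of the TLS path: the ssl socket implementation is selected and pinned to TLS 1.3, the listener is created with -k so it requires TLS, and the same PSK file is registered in a keyring on both the target and the initiator side. A condensed sketch, assuming rpc.py abbreviates the full scripts/rpc.py path used in the log and that /tmp/tmp.pDI60gb5u1 holds the NVMeTLSkey-1:01:... key generated earlier:

# target side (nvmf_tgt RPC socket /var/tmp/spdk.sock)
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf RPC socket /var/tmp/bdevperf.sock)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0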
00:24:16.697 5631.00 IOPS, 22.00 MiB/s 6118.00 IOPS, 23.90 MiB/s 6092.00 IOPS, 23.80 MiB/s 6200.50 IOPS, 24.22 MiB/s 6061.40 IOPS, 23.68 MiB/s 6140.17 IOPS, 23.99 MiB/s 6124.71 IOPS, 23.92 MiB/s 6084.25 IOPS, 23.77 MiB/s 6037.22 IOPS, 23.58 MiB/s 6072.90 IOPS, 23.72 MiB/s 00:24:16.697 Latency(us) 00:24:16.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.697 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:16.697 Verification LBA range: start 0x0 length 0x2000 00:24:16.697 TLSTESTn1 : 10.01 6076.97 23.74 0.00 0.00 21032.02 4614.83 79517.01 00:24:16.697 =================================================================================================================== 00:24:16.697 Total : 6076.97 23.74 0.00 0.00 21032.02 4614.83 79517.01 00:24:16.697 { 00:24:16.697 "results": [ 00:24:16.697 { 00:24:16.697 "job": "TLSTESTn1", 00:24:16.697 "core_mask": "0x4", 00:24:16.697 "workload": "verify", 00:24:16.697 "status": "finished", 00:24:16.697 "verify_range": { 00:24:16.697 "start": 0, 00:24:16.697 "length": 8192 00:24:16.697 }, 00:24:16.697 "queue_depth": 128, 00:24:16.697 "io_size": 4096, 00:24:16.697 "runtime": 10.014195, 00:24:16.697 "iops": 6076.973735782058, 00:24:16.697 "mibps": 23.738178655398663, 00:24:16.697 "io_failed": 0, 00:24:16.697 "io_timeout": 0, 00:24:16.697 "avg_latency_us": 21032.01658823014, 00:24:16.697 "min_latency_us": 4614.826666666667, 00:24:16.697 "max_latency_us": 79517.01333333334 00:24:16.697 } 00:24:16.697 ], 00:24:16.697 "core_count": 1 00:24:16.697 } 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3070015 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3070015 ']' 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3070015 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3070015 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3070015' 00:24:16.697 killing process with pid 3070015 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3070015 00:24:16.697 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.697 00:24:16.697 Latency(us) 00:24:16.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.697 =================================================================================================================== 00:24:16.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.697 17:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3070015 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 /tmp/tmp.0xcd3EZDTM 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0xcd3EZDTM 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0xcd3EZDTM 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0xcd3EZDTM 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.697 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3072502 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3072502 /var/tmp/bdevperf.sock 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072502 ']' 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.698 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.698 [2024-10-01 17:24:15.128808] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
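This bdevperf instance drives the first negative case: the initiator loads the second key (/tmp/tmp.0xcd3EZDTM) while the target only has the first key registered for nqn.2016-06.io.spdk:host1, so the TLS handshake cannot succeed and bdev_nvme_attach_controller is expected to end in the "Transport endpoint is not connected" errors and the JSON-RPC -5 (Input/output error) response shown below. The subsequent runs repeat the same pattern with a mismatched hostnqn (host2), a mismatched subsystem NQN (cnode2), and finally an empty key path, each expected to fail with its own error.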
00:24:16.698 [2024-10-01 17:24:15.128867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072502 ] 00:24:16.698 [2024-10-01 17:24:15.179146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.698 [2024-10-01 17:24:15.206958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.958 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.958 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.958 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0xcd3EZDTM 00:24:16.958 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.218 [2024-10-01 17:24:15.606672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.218 [2024-10-01 17:24:15.614366] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:17.218 [2024-10-01 17:24:15.614762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf8bf0 (107): Transport endpoint is not connected 00:24:17.218 [2024-10-01 17:24:15.615758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf8bf0 (9): Bad file descriptor 00:24:17.218 [2024-10-01 17:24:15.616760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.218 [2024-10-01 17:24:15.616768] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:17.218 [2024-10-01 17:24:15.616773] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:17.218 [2024-10-01 17:24:15.616780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:17.218 request: 00:24:17.218 { 00:24:17.218 "name": "TLSTEST", 00:24:17.218 "trtype": "tcp", 00:24:17.218 "traddr": "10.0.0.2", 00:24:17.218 "adrfam": "ipv4", 00:24:17.218 "trsvcid": "4420", 00:24:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.218 "prchk_reftag": false, 00:24:17.218 "prchk_guard": false, 00:24:17.218 "hdgst": false, 00:24:17.218 "ddgst": false, 00:24:17.218 "psk": "key0", 00:24:17.218 "allow_unrecognized_csi": false, 00:24:17.218 "method": "bdev_nvme_attach_controller", 00:24:17.218 "req_id": 1 00:24:17.218 } 00:24:17.218 Got JSON-RPC error response 00:24:17.218 response: 00:24:17.218 { 00:24:17.218 "code": -5, 00:24:17.218 "message": "Input/output error" 00:24:17.218 } 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3072502 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3072502 ']' 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072502 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072502 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072502' 00:24:17.218 killing process with pid 3072502 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072502 00:24:17.218 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.218 00:24:17.218 Latency(us) 00:24:17.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.218 =================================================================================================================== 00:24:17.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.218 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072502 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pDI60gb5u1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pDI60gb5u1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pDI60gb5u1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pDI60gb5u1 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3072569 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3072569 /var/tmp/bdevperf.sock 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072569 ']' 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.478 17:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.478 [2024-10-01 17:24:15.870926] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:24:17.478 [2024-10-01 17:24:15.870986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072569 ] 00:24:17.478 [2024-10-01 17:24:15.923733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.478 [2024-10-01 17:24:15.950899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.738 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.738 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:17.738 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1 00:24:17.738 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:17.999 [2024-10-01 17:24:16.362672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.999 [2024-10-01 17:24:16.367023] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:17.999 [2024-10-01 17:24:16.367044] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:17.999 [2024-10-01 17:24:16.367063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:17.999 [2024-10-01 17:24:16.367775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdbbf0 (107): Transport endpoint is not connected 00:24:17.999 [2024-10-01 17:24:16.368770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdbbf0 (9): Bad file descriptor 00:24:17.999 [2024-10-01 17:24:16.369772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:17.999 [2024-10-01 17:24:16.369779] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:17.999 [2024-10-01 17:24:16.369786] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:17.999 [2024-10-01 17:24:16.369794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:17.999 request: 00:24:17.999 { 00:24:17.999 "name": "TLSTEST", 00:24:17.999 "trtype": "tcp", 00:24:17.999 "traddr": "10.0.0.2", 00:24:17.999 "adrfam": "ipv4", 00:24:17.999 "trsvcid": "4420", 00:24:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:17.999 "prchk_reftag": false, 00:24:17.999 "prchk_guard": false, 00:24:17.999 "hdgst": false, 00:24:17.999 "ddgst": false, 00:24:17.999 "psk": "key0", 00:24:17.999 "allow_unrecognized_csi": false, 00:24:17.999 "method": "bdev_nvme_attach_controller", 00:24:17.999 "req_id": 1 00:24:18.000 } 00:24:18.000 Got JSON-RPC error response 00:24:18.000 response: 00:24:18.000 { 00:24:18.000 "code": -5, 00:24:18.000 "message": "Input/output error" 00:24:18.000 } 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3072569 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3072569 ']' 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072569 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072569 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072569' 00:24:18.000 killing process with pid 3072569 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072569 00:24:18.000 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.000 00:24:18.000 Latency(us) 00:24:18.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.000 =================================================================================================================== 00:24:18.000 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.000 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072569 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDI60gb5u1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDI60gb5u1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pDI60gb5u1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pDI60gb5u1 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3072855 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3072855 /var/tmp/bdevperf.sock 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072855 ']' 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.260 [2024-10-01 17:24:16.629226] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:24:18.260 [2024-10-01 17:24:16.629284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072855 ] 00:24:18.260 [2024-10-01 17:24:16.680529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.260 [2024-10-01 17:24:16.706454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.260 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pDI60gb5u1 00:24:18.520 17:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.780 [2024-10-01 17:24:17.114233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.780 [2024-10-01 17:24:17.118751] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:18.780 [2024-10-01 17:24:17.118769] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:18.780 [2024-10-01 17:24:17.118787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:18.780 [2024-10-01 17:24:17.119439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92ebf0 (107): Transport endpoint is not connected 00:24:18.780 [2024-10-01 17:24:17.120434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92ebf0 (9): Bad file descriptor 00:24:18.780 [2024-10-01 17:24:17.121435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:18.780 [2024-10-01 17:24:17.121443] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:18.780 [2024-10-01 17:24:17.121449] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:18.780 [2024-10-01 17:24:17.121457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:18.780 request: 00:24:18.780 { 00:24:18.780 "name": "TLSTEST", 00:24:18.780 "trtype": "tcp", 00:24:18.780 "traddr": "10.0.0.2", 00:24:18.780 "adrfam": "ipv4", 00:24:18.780 "trsvcid": "4420", 00:24:18.780 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.780 "prchk_reftag": false, 00:24:18.780 "prchk_guard": false, 00:24:18.780 "hdgst": false, 00:24:18.780 "ddgst": false, 00:24:18.780 "psk": "key0", 00:24:18.780 "allow_unrecognized_csi": false, 00:24:18.780 "method": "bdev_nvme_attach_controller", 00:24:18.780 "req_id": 1 00:24:18.780 } 00:24:18.780 Got JSON-RPC error response 00:24:18.780 response: 00:24:18.780 { 00:24:18.780 "code": -5, 00:24:18.780 "message": "Input/output error" 00:24:18.780 } 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3072855 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3072855 ']' 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072855 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072855 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072855' 00:24:18.780 killing process with pid 3072855 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072855 00:24:18.780 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.780 00:24:18.780 Latency(us) 00:24:18.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.780 =================================================================================================================== 00:24:18.780 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072855 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.780 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3072873 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3072873 /var/tmp/bdevperf.sock 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072873 ']' 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.781 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.041 [2024-10-01 17:24:17.378896] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:24:19.041 [2024-10-01 17:24:17.378959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072873 ] 00:24:19.041 [2024-10-01 17:24:17.432608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.041 [2024-10-01 17:24:17.460210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.041 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.041 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:19.041 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:19.303 [2024-10-01 17:24:17.679773] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:19.303 [2024-10-01 17:24:17.679803] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:19.303 request: 00:24:19.303 { 00:24:19.303 "name": "key0", 00:24:19.303 "path": "", 00:24:19.303 "method": "keyring_file_add_key", 00:24:19.303 "req_id": 1 00:24:19.303 } 00:24:19.303 Got JSON-RPC error response 00:24:19.303 response: 00:24:19.303 { 00:24:19.303 "code": -1, 00:24:19.303 "message": "Operation not permitted" 00:24:19.303 } 00:24:19.303 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.563 [2024-10-01 17:24:17.864310] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:19.563 [2024-10-01 17:24:17.864333] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:19.563 request: 00:24:19.563 { 00:24:19.563 "name": "TLSTEST", 00:24:19.563 "trtype": "tcp", 00:24:19.563 "traddr": "10.0.0.2", 00:24:19.563 "adrfam": "ipv4", 00:24:19.563 "trsvcid": "4420", 00:24:19.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.563 "prchk_reftag": false, 00:24:19.563 "prchk_guard": false, 00:24:19.563 "hdgst": false, 00:24:19.563 "ddgst": false, 00:24:19.563 "psk": "key0", 00:24:19.563 "allow_unrecognized_csi": false, 00:24:19.563 "method": "bdev_nvme_attach_controller", 00:24:19.563 "req_id": 1 00:24:19.563 } 00:24:19.563 Got JSON-RPC error response 00:24:19.563 response: 00:24:19.563 { 00:24:19.563 "code": -126, 00:24:19.563 "message": "Required key not available" 00:24:19.563 } 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3072873 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3072873 ']' 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072873 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3072873 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072873' 00:24:19.563 killing process with pid 3072873 00:24:19.563 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072873 00:24:19.563 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.564 00:24:19.564 Latency(us) 00:24:19.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.564 =================================================================================================================== 00:24:19.564 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.564 17:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072873 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3067102 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3067102 ']' 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3067102 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.564 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067102 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067102' 00:24:19.824 killing process with pid 3067102 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3067102 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3067102 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Ce8noBp1z6 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Ce8noBp1z6 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3073215 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3073215 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3073215 ']' 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.824 17:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.824 [2024-10-01 17:24:18.356286] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:19.824 [2024-10-01 17:24:18.356338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.084 [2024-10-01 17:24:18.437020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.084 [2024-10-01 17:24:18.464708] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.084 [2024-10-01 17:24:18.464739] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
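After the empty-path and missing-key failures above, target/tls.sh@160 builds a long-format TLS PSK. Judging by the values in this run, format_interchange_psk appears to wrap the configured key in the NVMe/TCP PSK interchange form "NVMeTLSkey-1:02:<base64 payload>:", with a short checksum appended to the payload by the python helper; that reading is an inference from the printed values, not something the log states. The key is then written to a mktemp file and locked down to 0600 before it is ever handed to the keyring. A sketch of that preparation step with the literal values from this run:

  # write the interchange-format PSK to a private file and register it with the keyring
  echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > /tmp/tmp.Ce8noBp1z6
  chmod 0600 /tmp/tmp.Ce8noBp1z6
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6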
00:24:20.084 [2024-10-01 17:24:18.464746] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.084 [2024-10-01 17:24:18.464750] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.084 [2024-10-01 17:24:18.464755] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.084 [2024-10-01 17:24:18.464770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ce8noBp1z6 00:24:20.654 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.914 [2024-10-01 17:24:19.334618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.914 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.174 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.174 [2024-10-01 17:24:19.671446] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.174 [2024-10-01 17:24:19.671655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.174 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.434 malloc0 00:24:21.434 17:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:21.696 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:21.696 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ce8noBp1z6 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ce8noBp1z6 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3073582 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3073582 /var/tmp/bdevperf.sock 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3073582 ']' 00:24:21.957 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.958 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.958 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.958 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.958 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.958 [2024-10-01 17:24:20.440574] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
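The target-side TLS setup that setup_nvmf_tgt performs above reduces to a short rpc.py sequence: create the TCP transport, create the subsystem, open a TLS-capable listener (-k), expose a malloc namespace, register the key file, and bind it to the host NQN. A condensed sketch with the same NQNs, address, and key name as this log (rpc.py stands for the full spdk/scripts/rpc.py path shown above):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0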
00:24:21.958 [2024-10-01 17:24:20.440627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073582 ] 00:24:21.958 [2024-10-01 17:24:20.490007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.218 [2024-10-01 17:24:20.517860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.218 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.218 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:22.218 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:22.218 17:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.480 [2024-10-01 17:24:20.917487] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.480 TLSTESTn1 00:24:22.480 17:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:22.741 Running I/O for 10 seconds... 00:24:32.602 5502.00 IOPS, 21.49 MiB/s 5838.50 IOPS, 22.81 MiB/s 5755.00 IOPS, 22.48 MiB/s 5719.50 IOPS, 22.34 MiB/s 5882.60 IOPS, 22.98 MiB/s 5800.17 IOPS, 22.66 MiB/s 5707.00 IOPS, 22.29 MiB/s 5684.75 IOPS, 22.21 MiB/s 5703.67 IOPS, 22.28 MiB/s 5618.90 IOPS, 21.95 MiB/s 00:24:32.602 Latency(us) 00:24:32.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.602 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:32.602 Verification LBA range: start 0x0 length 0x2000 00:24:32.602 TLSTESTn1 : 10.02 5621.63 21.96 0.00 0.00 22735.95 4724.05 69468.16 00:24:32.602 =================================================================================================================== 00:24:32.602 Total : 5621.63 21.96 0.00 0.00 22735.95 4724.05 69468.16 00:24:32.602 { 00:24:32.602 "results": [ 00:24:32.602 { 00:24:32.602 "job": "TLSTESTn1", 00:24:32.602 "core_mask": "0x4", 00:24:32.602 "workload": "verify", 00:24:32.602 "status": "finished", 00:24:32.602 "verify_range": { 00:24:32.602 "start": 0, 00:24:32.602 "length": 8192 00:24:32.602 }, 00:24:32.602 "queue_depth": 128, 00:24:32.602 "io_size": 4096, 00:24:32.602 "runtime": 10.017913, 00:24:32.602 "iops": 5621.629974227167, 00:24:32.602 "mibps": 21.95949208682487, 00:24:32.602 "io_failed": 0, 00:24:32.602 "io_timeout": 0, 00:24:32.602 "avg_latency_us": 22735.954107403923, 00:24:32.602 "min_latency_us": 4724.053333333333, 00:24:32.602 "max_latency_us": 69468.16 00:24:32.602 } 00:24:32.602 ], 00:24:32.602 "core_count": 1 00:24:32.602 } 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3073582 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 3073582 ']' 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3073582 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3073582 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3073582' 00:24:32.864 killing process with pid 3073582 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3073582 00:24:32.864 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.864 00:24:32.864 Latency(us) 00:24:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.864 =================================================================================================================== 00:24:32.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3073582 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Ce8noBp1z6 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ce8noBp1z6 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ce8noBp1z6 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ce8noBp1z6 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ce8noBp1z6 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3075600 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3075600 /var/tmp/bdevperf.sock 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3075600 ']' 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.864 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.864 [2024-10-01 17:24:31.397148] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:32.864 [2024-10-01 17:24:31.397209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075600 ] 00:24:33.125 [2024-10-01 17:24:31.447412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.125 [2024-10-01 17:24:31.475417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.125 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.125 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.125 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:33.386 [2024-10-01 17:24:31.690592] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ce8noBp1z6': 0100666 00:24:33.386 [2024-10-01 17:24:31.690613] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:33.386 request: 00:24:33.386 { 00:24:33.386 "name": "key0", 00:24:33.386 "path": "/tmp/tmp.Ce8noBp1z6", 00:24:33.386 "method": "keyring_file_add_key", 00:24:33.386 "req_id": 1 00:24:33.386 } 00:24:33.386 Got JSON-RPC error response 00:24:33.386 response: 00:24:33.386 { 00:24:33.386 "code": -1, 00:24:33.386 "message": "Operation not permitted" 00:24:33.386 } 00:24:33.386 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:33.386 [2024-10-01 17:24:31.859085] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.386 [2024-10-01 17:24:31.859110] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:24:33.386 request: 00:24:33.386 { 00:24:33.386 "name": "TLSTEST", 00:24:33.386 "trtype": "tcp", 00:24:33.386 "traddr": "10.0.0.2", 00:24:33.386 "adrfam": "ipv4", 00:24:33.386 "trsvcid": "4420", 00:24:33.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.386 "prchk_reftag": false, 00:24:33.386 "prchk_guard": false, 00:24:33.386 "hdgst": false, 00:24:33.386 "ddgst": false, 00:24:33.386 "psk": "key0", 00:24:33.386 "allow_unrecognized_csi": false, 00:24:33.386 "method": "bdev_nvme_attach_controller", 00:24:33.386 "req_id": 1 00:24:33.386 } 00:24:33.386 Got JSON-RPC error response 00:24:33.386 response: 00:24:33.386 { 00:24:33.386 "code": -126, 00:24:33.386 "message": "Required key not available" 00:24:33.386 } 00:24:33.386 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3075600 00:24:33.386 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3075600 ']' 00:24:33.386 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3075600 00:24:33.387 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.387 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.387 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3075600 00:24:33.648 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:33.648 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:33.648 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3075600' 00:24:33.648 killing process with pid 3075600 00:24:33.648 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3075600 00:24:33.648 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.648 00:24:33.648 Latency(us) 00:24:33.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.648 =================================================================================================================== 00:24:33.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.648 17:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3075600 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3073215 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3073215 ']' 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3073215 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3073215 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3073215' 00:24:33.648 killing process with pid 3073215 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3073215 00:24:33.648 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3073215 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3075933 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3075933 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3075933 ']' 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.910 17:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.910 [2024-10-01 17:24:32.284323] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:33.910 [2024-10-01 17:24:32.284377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.910 [2024-10-01 17:24:32.367206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.910 [2024-10-01 17:24:32.394787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.910 [2024-10-01 17:24:32.394826] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
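The pass that starts at target/tls.sh@171 above is the file-permission negative test: the key file is deliberately opened up to 0666, the keyring module rejects it ("Invalid permissions for key file ... 0100666"), and the subsequent bdev_nvme_attach_controller therefore fails with -126, which is the outcome the harness expects. A sketch of the check being exercised, with the same file and socket as this run (the rejection is the intended result):

  chmod 0666 /tmp/tmp.Ce8noBp1z6
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6   # rejected: group/world readable
  chmod 0600 /tmp/tmp.Ce8noBp1z6                                                           # restored later, at target/tls.sh@182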
00:24:33.910 [2024-10-01 17:24:32.394832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.910 [2024-10-01 17:24:32.394836] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.910 [2024-10-01 17:24:32.394841] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.910 [2024-10-01 17:24:32.394858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ce8noBp1z6 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.852 [2024-10-01 17:24:33.273054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.852 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.113 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.113 [2024-10-01 17:24:33.593845] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.113 [2024-10-01 17:24:33.594074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.113 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.373 malloc0 00:24:35.373 17:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:35.634 17:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:35.634 [2024-10-01 17:24:34.105136] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ce8noBp1z6': 0100666 00:24:35.634 [2024-10-01 17:24:34.105161] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:35.634 request: 00:24:35.634 { 00:24:35.634 "name": "key0", 00:24:35.634 "path": "/tmp/tmp.Ce8noBp1z6", 00:24:35.634 "method": "keyring_file_add_key", 00:24:35.634 "req_id": 1 00:24:35.634 } 00:24:35.634 Got JSON-RPC error response 00:24:35.634 response: 00:24:35.634 { 00:24:35.634 "code": -1, 00:24:35.634 "message": "Operation not permitted" 00:24:35.634 } 00:24:35.634 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.895 [2024-10-01 17:24:34.269562] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:35.895 [2024-10-01 17:24:34.269589] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:35.895 request: 00:24:35.895 { 00:24:35.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.895 "host": "nqn.2016-06.io.spdk:host1", 00:24:35.895 "psk": "key0", 00:24:35.895 "method": "nvmf_subsystem_add_host", 00:24:35.895 "req_id": 1 00:24:35.895 } 00:24:35.895 Got JSON-RPC error response 00:24:35.895 response: 00:24:35.895 { 00:24:35.895 "code": -32603, 00:24:35.895 "message": "Internal error" 00:24:35.895 } 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3075933 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3075933 ']' 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3075933 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3075933 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3075933' 00:24:35.895 killing process with pid 3075933 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3075933 00:24:35.895 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3075933 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Ce8noBp1z6 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3076314 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3076314 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3076314 ']' 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.156 17:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.156 [2024-10-01 17:24:34.547977] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:36.156 [2024-10-01 17:24:34.548055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.156 [2024-10-01 17:24:34.630678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.156 [2024-10-01 17:24:34.658802] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.156 [2024-10-01 17:24:34.658837] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.156 [2024-10-01 17:24:34.658843] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.156 [2024-10-01 17:24:34.658847] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.156 [2024-10-01 17:24:34.658851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
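The same permission check is repeated on the target side at target/tls.sh@178: because the 0666 key file is never accepted into the keyring, the later nvmf_subsystem_add_host --psk key0 fails with "Key 'key0' does not exist" (-32603 Internal error), which the NOT wrapper counts as the expected result. The ordering constraint this demonstrates, using the commands from this log, is that key registration (from a 0600 file) must succeed before any subsystem host entry can reference it:

  # the key must be in the keyring before nvmf_subsystem_add_host refers to it
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6      # requires the 0600 mode re-applied at target/tls.sh@182
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0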
00:24:36.156 [2024-10-01 17:24:34.658866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ce8noBp1z6 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:37.098 [2024-10-01 17:24:35.508236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.098 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:37.358 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:37.358 [2024-10-01 17:24:35.829034] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.358 [2024-10-01 17:24:35.829240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.358 17:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:37.620 malloc0 00:24:37.620 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:37.883 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:37.883 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3076678 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3076678 /var/tmp/bdevperf.sock 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3076678 ']' 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.150 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.150 [2024-10-01 17:24:36.544891] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:38.150 [2024-10-01 17:24:36.544944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076678 ] 00:24:38.150 [2024-10-01 17:24:36.595627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.150 [2024-10-01 17:24:36.623730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.410 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.410 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:38.410 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:38.410 17:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.670 [2024-10-01 17:24:37.023464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.670 TLSTESTn1 00:24:38.670 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:38.931 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:38.931 "subsystems": [ 00:24:38.931 { 00:24:38.931 "subsystem": "keyring", 00:24:38.931 "config": [ 00:24:38.931 { 00:24:38.931 "method": "keyring_file_add_key", 00:24:38.931 "params": { 00:24:38.931 "name": "key0", 00:24:38.931 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:38.931 } 00:24:38.931 } 00:24:38.931 ] 00:24:38.931 }, 00:24:38.931 { 00:24:38.931 "subsystem": "iobuf", 00:24:38.931 "config": [ 00:24:38.931 { 00:24:38.931 "method": "iobuf_set_options", 00:24:38.931 "params": { 00:24:38.931 "small_pool_count": 8192, 00:24:38.931 "large_pool_count": 1024, 00:24:38.931 "small_bufsize": 8192, 00:24:38.931 "large_bufsize": 135168 00:24:38.931 } 00:24:38.931 } 00:24:38.931 ] 00:24:38.931 }, 00:24:38.931 { 00:24:38.931 "subsystem": "sock", 00:24:38.931 "config": [ 00:24:38.931 { 00:24:38.931 "method": "sock_set_default_impl", 00:24:38.931 "params": { 00:24:38.931 "impl_name": "posix" 00:24:38.931 } 00:24:38.931 }, 
00:24:38.931 { 00:24:38.931 "method": "sock_impl_set_options", 00:24:38.931 "params": { 00:24:38.931 "impl_name": "ssl", 00:24:38.931 "recv_buf_size": 4096, 00:24:38.931 "send_buf_size": 4096, 00:24:38.931 "enable_recv_pipe": true, 00:24:38.931 "enable_quickack": false, 00:24:38.931 "enable_placement_id": 0, 00:24:38.931 "enable_zerocopy_send_server": true, 00:24:38.931 "enable_zerocopy_send_client": false, 00:24:38.931 "zerocopy_threshold": 0, 00:24:38.931 "tls_version": 0, 00:24:38.931 "enable_ktls": false 00:24:38.931 } 00:24:38.931 }, 00:24:38.931 { 00:24:38.931 "method": "sock_impl_set_options", 00:24:38.931 "params": { 00:24:38.931 "impl_name": "posix", 00:24:38.931 "recv_buf_size": 2097152, 00:24:38.931 "send_buf_size": 2097152, 00:24:38.931 "enable_recv_pipe": true, 00:24:38.931 "enable_quickack": false, 00:24:38.931 "enable_placement_id": 0, 00:24:38.931 "enable_zerocopy_send_server": true, 00:24:38.931 "enable_zerocopy_send_client": false, 00:24:38.931 "zerocopy_threshold": 0, 00:24:38.931 "tls_version": 0, 00:24:38.931 "enable_ktls": false 00:24:38.931 } 00:24:38.931 } 00:24:38.931 ] 00:24:38.931 }, 00:24:38.931 { 00:24:38.931 "subsystem": "vmd", 00:24:38.931 "config": [] 00:24:38.931 }, 00:24:38.931 { 00:24:38.931 "subsystem": "accel", 00:24:38.932 "config": [ 00:24:38.932 { 00:24:38.932 "method": "accel_set_options", 00:24:38.932 "params": { 00:24:38.932 "small_cache_size": 128, 00:24:38.932 "large_cache_size": 16, 00:24:38.932 "task_count": 2048, 00:24:38.932 "sequence_count": 2048, 00:24:38.932 "buf_count": 2048 00:24:38.932 } 00:24:38.932 } 00:24:38.932 ] 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "subsystem": "bdev", 00:24:38.932 "config": [ 00:24:38.932 { 00:24:38.932 "method": "bdev_set_options", 00:24:38.932 "params": { 00:24:38.932 "bdev_io_pool_size": 65535, 00:24:38.932 "bdev_io_cache_size": 256, 00:24:38.932 "bdev_auto_examine": true, 00:24:38.932 "iobuf_small_cache_size": 128, 00:24:38.932 "iobuf_large_cache_size": 16 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_raid_set_options", 00:24:38.932 "params": { 00:24:38.932 "process_window_size_kb": 1024, 00:24:38.932 "process_max_bandwidth_mb_sec": 0 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_iscsi_set_options", 00:24:38.932 "params": { 00:24:38.932 "timeout_sec": 30 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_nvme_set_options", 00:24:38.932 "params": { 00:24:38.932 "action_on_timeout": "none", 00:24:38.932 "timeout_us": 0, 00:24:38.932 "timeout_admin_us": 0, 00:24:38.932 "keep_alive_timeout_ms": 10000, 00:24:38.932 "arbitration_burst": 0, 00:24:38.932 "low_priority_weight": 0, 00:24:38.932 "medium_priority_weight": 0, 00:24:38.932 "high_priority_weight": 0, 00:24:38.932 "nvme_adminq_poll_period_us": 10000, 00:24:38.932 "nvme_ioq_poll_period_us": 0, 00:24:38.932 "io_queue_requests": 0, 00:24:38.932 "delay_cmd_submit": true, 00:24:38.932 "transport_retry_count": 4, 00:24:38.932 "bdev_retry_count": 3, 00:24:38.932 "transport_ack_timeout": 0, 00:24:38.932 "ctrlr_loss_timeout_sec": 0, 00:24:38.932 "reconnect_delay_sec": 0, 00:24:38.932 "fast_io_fail_timeout_sec": 0, 00:24:38.932 "disable_auto_failback": false, 00:24:38.932 "generate_uuids": false, 00:24:38.932 "transport_tos": 0, 00:24:38.932 "nvme_error_stat": false, 00:24:38.932 "rdma_srq_size": 0, 00:24:38.932 "io_path_stat": false, 00:24:38.932 "allow_accel_sequence": false, 00:24:38.932 "rdma_max_cq_size": 0, 00:24:38.932 "rdma_cm_event_timeout_ms": 0, 00:24:38.932 
"dhchap_digests": [ 00:24:38.932 "sha256", 00:24:38.932 "sha384", 00:24:38.932 "sha512" 00:24:38.932 ], 00:24:38.932 "dhchap_dhgroups": [ 00:24:38.932 "null", 00:24:38.932 "ffdhe2048", 00:24:38.932 "ffdhe3072", 00:24:38.932 "ffdhe4096", 00:24:38.932 "ffdhe6144", 00:24:38.932 "ffdhe8192" 00:24:38.932 ] 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_nvme_set_hotplug", 00:24:38.932 "params": { 00:24:38.932 "period_us": 100000, 00:24:38.932 "enable": false 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_malloc_create", 00:24:38.932 "params": { 00:24:38.932 "name": "malloc0", 00:24:38.932 "num_blocks": 8192, 00:24:38.932 "block_size": 4096, 00:24:38.932 "physical_block_size": 4096, 00:24:38.932 "uuid": "d629e11b-cf24-4005-88f7-e39f19fa3957", 00:24:38.932 "optimal_io_boundary": 0, 00:24:38.932 "md_size": 0, 00:24:38.932 "dif_type": 0, 00:24:38.932 "dif_is_head_of_md": false, 00:24:38.932 "dif_pi_format": 0 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "bdev_wait_for_examine" 00:24:38.932 } 00:24:38.932 ] 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "subsystem": "nbd", 00:24:38.932 "config": [] 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "subsystem": "scheduler", 00:24:38.932 "config": [ 00:24:38.932 { 00:24:38.932 "method": "framework_set_scheduler", 00:24:38.932 "params": { 00:24:38.932 "name": "static" 00:24:38.932 } 00:24:38.932 } 00:24:38.932 ] 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "subsystem": "nvmf", 00:24:38.932 "config": [ 00:24:38.932 { 00:24:38.932 "method": "nvmf_set_config", 00:24:38.932 "params": { 00:24:38.932 "discovery_filter": "match_any", 00:24:38.932 "admin_cmd_passthru": { 00:24:38.932 "identify_ctrlr": false 00:24:38.932 }, 00:24:38.932 "dhchap_digests": [ 00:24:38.932 "sha256", 00:24:38.932 "sha384", 00:24:38.932 "sha512" 00:24:38.932 ], 00:24:38.932 "dhchap_dhgroups": [ 00:24:38.932 "null", 00:24:38.932 "ffdhe2048", 00:24:38.932 "ffdhe3072", 00:24:38.932 "ffdhe4096", 00:24:38.932 "ffdhe6144", 00:24:38.932 "ffdhe8192" 00:24:38.932 ] 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_set_max_subsystems", 00:24:38.932 "params": { 00:24:38.932 "max_subsystems": 1024 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_set_crdt", 00:24:38.932 "params": { 00:24:38.932 "crdt1": 0, 00:24:38.932 "crdt2": 0, 00:24:38.932 "crdt3": 0 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_create_transport", 00:24:38.932 "params": { 00:24:38.932 "trtype": "TCP", 00:24:38.932 "max_queue_depth": 128, 00:24:38.932 "max_io_qpairs_per_ctrlr": 127, 00:24:38.932 "in_capsule_data_size": 4096, 00:24:38.932 "max_io_size": 131072, 00:24:38.932 "io_unit_size": 131072, 00:24:38.932 "max_aq_depth": 128, 00:24:38.932 "num_shared_buffers": 511, 00:24:38.932 "buf_cache_size": 4294967295, 00:24:38.932 "dif_insert_or_strip": false, 00:24:38.932 "zcopy": false, 00:24:38.932 "c2h_success": false, 00:24:38.932 "sock_priority": 0, 00:24:38.932 "abort_timeout_sec": 1, 00:24:38.932 "ack_timeout": 0, 00:24:38.932 "data_wr_pool_size": 0 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_create_subsystem", 00:24:38.932 "params": { 00:24:38.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.932 "allow_any_host": false, 00:24:38.932 "serial_number": "SPDK00000000000001", 00:24:38.932 "model_number": "SPDK bdev Controller", 00:24:38.932 "max_namespaces": 10, 00:24:38.932 "min_cntlid": 1, 00:24:38.932 "max_cntlid": 65519, 00:24:38.932 
"ana_reporting": false 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_subsystem_add_host", 00:24:38.932 "params": { 00:24:38.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.932 "host": "nqn.2016-06.io.spdk:host1", 00:24:38.932 "psk": "key0" 00:24:38.932 } 00:24:38.932 }, 00:24:38.932 { 00:24:38.932 "method": "nvmf_subsystem_add_ns", 00:24:38.932 "params": { 00:24:38.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.933 "namespace": { 00:24:38.933 "nsid": 1, 00:24:38.933 "bdev_name": "malloc0", 00:24:38.933 "nguid": "D629E11BCF24400588F7E39F19FA3957", 00:24:38.933 "uuid": "d629e11b-cf24-4005-88f7-e39f19fa3957", 00:24:38.933 "no_auto_visible": false 00:24:38.933 } 00:24:38.933 } 00:24:38.933 }, 00:24:38.933 { 00:24:38.933 "method": "nvmf_subsystem_add_listener", 00:24:38.933 "params": { 00:24:38.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.933 "listen_address": { 00:24:38.933 "trtype": "TCP", 00:24:38.933 "adrfam": "IPv4", 00:24:38.933 "traddr": "10.0.0.2", 00:24:38.933 "trsvcid": "4420" 00:24:38.933 }, 00:24:38.933 "secure_channel": true 00:24:38.933 } 00:24:38.933 } 00:24:38.933 ] 00:24:38.933 } 00:24:38.933 ] 00:24:38.933 }' 00:24:38.933 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:39.193 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:39.193 "subsystems": [ 00:24:39.193 { 00:24:39.193 "subsystem": "keyring", 00:24:39.193 "config": [ 00:24:39.193 { 00:24:39.194 "method": "keyring_file_add_key", 00:24:39.194 "params": { 00:24:39.194 "name": "key0", 00:24:39.194 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:39.194 } 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "subsystem": "iobuf", 00:24:39.194 "config": [ 00:24:39.194 { 00:24:39.194 "method": "iobuf_set_options", 00:24:39.194 "params": { 00:24:39.194 "small_pool_count": 8192, 00:24:39.194 "large_pool_count": 1024, 00:24:39.194 "small_bufsize": 8192, 00:24:39.194 "large_bufsize": 135168 00:24:39.194 } 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "subsystem": "sock", 00:24:39.194 "config": [ 00:24:39.194 { 00:24:39.194 "method": "sock_set_default_impl", 00:24:39.194 "params": { 00:24:39.194 "impl_name": "posix" 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "sock_impl_set_options", 00:24:39.194 "params": { 00:24:39.194 "impl_name": "ssl", 00:24:39.194 "recv_buf_size": 4096, 00:24:39.194 "send_buf_size": 4096, 00:24:39.194 "enable_recv_pipe": true, 00:24:39.194 "enable_quickack": false, 00:24:39.194 "enable_placement_id": 0, 00:24:39.194 "enable_zerocopy_send_server": true, 00:24:39.194 "enable_zerocopy_send_client": false, 00:24:39.194 "zerocopy_threshold": 0, 00:24:39.194 "tls_version": 0, 00:24:39.194 "enable_ktls": false 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "sock_impl_set_options", 00:24:39.194 "params": { 00:24:39.194 "impl_name": "posix", 00:24:39.194 "recv_buf_size": 2097152, 00:24:39.194 "send_buf_size": 2097152, 00:24:39.194 "enable_recv_pipe": true, 00:24:39.194 "enable_quickack": false, 00:24:39.194 "enable_placement_id": 0, 00:24:39.194 "enable_zerocopy_send_server": true, 00:24:39.194 "enable_zerocopy_send_client": false, 00:24:39.194 "zerocopy_threshold": 0, 00:24:39.194 "tls_version": 0, 00:24:39.194 "enable_ktls": false 00:24:39.194 } 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 
"subsystem": "vmd", 00:24:39.194 "config": [] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "subsystem": "accel", 00:24:39.194 "config": [ 00:24:39.194 { 00:24:39.194 "method": "accel_set_options", 00:24:39.194 "params": { 00:24:39.194 "small_cache_size": 128, 00:24:39.194 "large_cache_size": 16, 00:24:39.194 "task_count": 2048, 00:24:39.194 "sequence_count": 2048, 00:24:39.194 "buf_count": 2048 00:24:39.194 } 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "subsystem": "bdev", 00:24:39.194 "config": [ 00:24:39.194 { 00:24:39.194 "method": "bdev_set_options", 00:24:39.194 "params": { 00:24:39.194 "bdev_io_pool_size": 65535, 00:24:39.194 "bdev_io_cache_size": 256, 00:24:39.194 "bdev_auto_examine": true, 00:24:39.194 "iobuf_small_cache_size": 128, 00:24:39.194 "iobuf_large_cache_size": 16 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_raid_set_options", 00:24:39.194 "params": { 00:24:39.194 "process_window_size_kb": 1024, 00:24:39.194 "process_max_bandwidth_mb_sec": 0 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_iscsi_set_options", 00:24:39.194 "params": { 00:24:39.194 "timeout_sec": 30 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_nvme_set_options", 00:24:39.194 "params": { 00:24:39.194 "action_on_timeout": "none", 00:24:39.194 "timeout_us": 0, 00:24:39.194 "timeout_admin_us": 0, 00:24:39.194 "keep_alive_timeout_ms": 10000, 00:24:39.194 "arbitration_burst": 0, 00:24:39.194 "low_priority_weight": 0, 00:24:39.194 "medium_priority_weight": 0, 00:24:39.194 "high_priority_weight": 0, 00:24:39.194 "nvme_adminq_poll_period_us": 10000, 00:24:39.194 "nvme_ioq_poll_period_us": 0, 00:24:39.194 "io_queue_requests": 512, 00:24:39.194 "delay_cmd_submit": true, 00:24:39.194 "transport_retry_count": 4, 00:24:39.194 "bdev_retry_count": 3, 00:24:39.194 "transport_ack_timeout": 0, 00:24:39.194 "ctrlr_loss_timeout_sec": 0, 00:24:39.194 "reconnect_delay_sec": 0, 00:24:39.194 "fast_io_fail_timeout_sec": 0, 00:24:39.194 "disable_auto_failback": false, 00:24:39.194 "generate_uuids": false, 00:24:39.194 "transport_tos": 0, 00:24:39.194 "nvme_error_stat": false, 00:24:39.194 "rdma_srq_size": 0, 00:24:39.194 "io_path_stat": false, 00:24:39.194 "allow_accel_sequence": false, 00:24:39.194 "rdma_max_cq_size": 0, 00:24:39.194 "rdma_cm_event_timeout_ms": 0, 00:24:39.194 "dhchap_digests": [ 00:24:39.194 "sha256", 00:24:39.194 "sha384", 00:24:39.194 "sha512" 00:24:39.194 ], 00:24:39.194 "dhchap_dhgroups": [ 00:24:39.194 "null", 00:24:39.194 "ffdhe2048", 00:24:39.194 "ffdhe3072", 00:24:39.194 "ffdhe4096", 00:24:39.194 "ffdhe6144", 00:24:39.194 "ffdhe8192" 00:24:39.194 ] 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_nvme_attach_controller", 00:24:39.194 "params": { 00:24:39.194 "name": "TLSTEST", 00:24:39.194 "trtype": "TCP", 00:24:39.194 "adrfam": "IPv4", 00:24:39.194 "traddr": "10.0.0.2", 00:24:39.194 "trsvcid": "4420", 00:24:39.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.194 "prchk_reftag": false, 00:24:39.194 "prchk_guard": false, 00:24:39.194 "ctrlr_loss_timeout_sec": 0, 00:24:39.194 "reconnect_delay_sec": 0, 00:24:39.194 "fast_io_fail_timeout_sec": 0, 00:24:39.194 "psk": "key0", 00:24:39.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.194 "hdgst": false, 00:24:39.194 "ddgst": false 00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_nvme_set_hotplug", 00:24:39.194 "params": { 00:24:39.194 "period_us": 100000, 00:24:39.194 "enable": false 
00:24:39.194 } 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "method": "bdev_wait_for_examine" 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }, 00:24:39.194 { 00:24:39.194 "subsystem": "nbd", 00:24:39.194 "config": [] 00:24:39.194 } 00:24:39.194 ] 00:24:39.194 }' 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3076678 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3076678 ']' 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3076678 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076678 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076678' 00:24:39.194 killing process with pid 3076678 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3076678 00:24:39.194 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.194 00:24:39.194 Latency(us) 00:24:39.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.194 =================================================================================================================== 00:24:39.194 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:39.194 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3076678 00:24:39.454 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3076314 ']' 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076314' 00:24:39.455 killing process with pid 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3076314 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:39.455 17:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.455 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:39.455 "subsystems": [ 00:24:39.455 { 00:24:39.455 "subsystem": "keyring", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "keyring_file_add_key", 00:24:39.455 "params": { 00:24:39.455 "name": "key0", 00:24:39.455 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:39.455 } 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "iobuf", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "iobuf_set_options", 00:24:39.455 "params": { 00:24:39.455 "small_pool_count": 8192, 00:24:39.455 "large_pool_count": 1024, 00:24:39.455 "small_bufsize": 8192, 00:24:39.455 "large_bufsize": 135168 00:24:39.455 } 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "sock", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "sock_set_default_impl", 00:24:39.455 "params": { 00:24:39.455 "impl_name": "posix" 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "sock_impl_set_options", 00:24:39.455 "params": { 00:24:39.455 "impl_name": "ssl", 00:24:39.455 "recv_buf_size": 4096, 00:24:39.455 "send_buf_size": 4096, 00:24:39.455 "enable_recv_pipe": true, 00:24:39.455 "enable_quickack": false, 00:24:39.455 "enable_placement_id": 0, 00:24:39.455 "enable_zerocopy_send_server": true, 00:24:39.455 "enable_zerocopy_send_client": false, 00:24:39.455 "zerocopy_threshold": 0, 00:24:39.455 "tls_version": 0, 00:24:39.455 "enable_ktls": false 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "sock_impl_set_options", 00:24:39.455 "params": { 00:24:39.455 "impl_name": "posix", 00:24:39.455 "recv_buf_size": 2097152, 00:24:39.455 "send_buf_size": 2097152, 00:24:39.455 "enable_recv_pipe": true, 00:24:39.455 "enable_quickack": false, 00:24:39.455 "enable_placement_id": 0, 00:24:39.455 "enable_zerocopy_send_server": true, 00:24:39.455 "enable_zerocopy_send_client": false, 00:24:39.455 "zerocopy_threshold": 0, 00:24:39.455 "tls_version": 0, 00:24:39.455 "enable_ktls": false 00:24:39.455 } 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "vmd", 00:24:39.455 "config": [] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "accel", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "accel_set_options", 00:24:39.455 "params": { 00:24:39.455 "small_cache_size": 128, 00:24:39.455 "large_cache_size": 16, 00:24:39.455 "task_count": 2048, 00:24:39.455 "sequence_count": 2048, 00:24:39.455 "buf_count": 2048 00:24:39.455 } 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "bdev", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "bdev_set_options", 00:24:39.455 "params": { 00:24:39.455 "bdev_io_pool_size": 65535, 00:24:39.455 "bdev_io_cache_size": 256, 00:24:39.455 "bdev_auto_examine": true, 00:24:39.455 "iobuf_small_cache_size": 128, 00:24:39.455 "iobuf_large_cache_size": 16 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_raid_set_options", 00:24:39.455 "params": { 00:24:39.455 "process_window_size_kb": 1024, 00:24:39.455 "process_max_bandwidth_mb_sec": 0 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_iscsi_set_options", 00:24:39.455 "params": { 00:24:39.455 "timeout_sec": 30 
00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_nvme_set_options", 00:24:39.455 "params": { 00:24:39.455 "action_on_timeout": "none", 00:24:39.455 "timeout_us": 0, 00:24:39.455 "timeout_admin_us": 0, 00:24:39.455 "keep_alive_timeout_ms": 10000, 00:24:39.455 "arbitration_burst": 0, 00:24:39.455 "low_priority_weight": 0, 00:24:39.455 "medium_priority_weight": 0, 00:24:39.455 "high_priority_weight": 0, 00:24:39.455 "nvme_adminq_poll_period_us": 10000, 00:24:39.455 "nvme_ioq_poll_period_us": 0, 00:24:39.455 "io_queue_requests": 0, 00:24:39.455 "delay_cmd_submit": true, 00:24:39.455 "transport_retry_count": 4, 00:24:39.455 "bdev_retry_count": 3, 00:24:39.455 "transport_ack_timeout": 0, 00:24:39.455 "ctrlr_loss_timeout_sec": 0, 00:24:39.455 "reconnect_delay_sec": 0, 00:24:39.455 "fast_io_fail_timeout_sec": 0, 00:24:39.455 "disable_auto_failback": false, 00:24:39.455 "generate_uuids": false, 00:24:39.455 "transport_tos": 0, 00:24:39.455 "nvme_error_stat": false, 00:24:39.455 "rdma_srq_size": 0, 00:24:39.455 "io_path_stat": false, 00:24:39.455 "allow_accel_sequence": false, 00:24:39.455 "rdma_max_cq_size": 0, 00:24:39.455 "rdma_cm_event_timeout_ms": 0, 00:24:39.455 "dhchap_digests": [ 00:24:39.455 "sha256", 00:24:39.455 "sha384", 00:24:39.455 "sha512" 00:24:39.455 ], 00:24:39.455 "dhchap_dhgroups": [ 00:24:39.455 "null", 00:24:39.455 "ffdhe2048", 00:24:39.455 "ffdhe3072", 00:24:39.455 "ffdhe4096", 00:24:39.455 "ffdhe6144", 00:24:39.455 "ffdhe8192" 00:24:39.455 ] 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_nvme_set_hotplug", 00:24:39.455 "params": { 00:24:39.455 "period_us": 100000, 00:24:39.455 "enable": false 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_malloc_create", 00:24:39.455 "params": { 00:24:39.455 "name": "malloc0", 00:24:39.455 "num_blocks": 8192, 00:24:39.455 "block_size": 4096, 00:24:39.455 "physical_block_size": 4096, 00:24:39.455 "uuid": "d629e11b-cf24-4005-88f7-e39f19fa3957", 00:24:39.455 "optimal_io_boundary": 0, 00:24:39.455 "md_size": 0, 00:24:39.455 "dif_type": 0, 00:24:39.455 "dif_is_head_of_md": false, 00:24:39.455 "dif_pi_format": 0 00:24:39.455 } 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "method": "bdev_wait_for_examine" 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "nbd", 00:24:39.455 "config": [] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "scheduler", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "framework_set_scheduler", 00:24:39.455 "params": { 00:24:39.455 "name": "static" 00:24:39.455 } 00:24:39.455 } 00:24:39.455 ] 00:24:39.455 }, 00:24:39.455 { 00:24:39.455 "subsystem": "nvmf", 00:24:39.455 "config": [ 00:24:39.455 { 00:24:39.455 "method": "nvmf_set_config", 00:24:39.455 "params": { 00:24:39.455 "discovery_filter": "match_any", 00:24:39.455 "admin_cmd_passthru": { 00:24:39.455 "identify_ctrlr": false 00:24:39.455 }, 00:24:39.455 "dhchap_digests": [ 00:24:39.455 "sha256", 00:24:39.456 "sha384", 00:24:39.456 "sha512" 00:24:39.456 ], 00:24:39.456 "dhchap_dhgroups": [ 00:24:39.456 "null", 00:24:39.456 "ffdhe2048", 00:24:39.456 "ffdhe3072", 00:24:39.456 "ffdhe4096", 00:24:39.456 "ffdhe6144", 00:24:39.456 "ffdhe8192" 00:24:39.456 ] 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_set_max_subsystems", 00:24:39.456 "params": { 00:24:39.456 "max_subsystems": 1024 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_set_crdt", 00:24:39.456 "params": { 00:24:39.456 
"crdt1": 0, 00:24:39.456 "crdt2": 0, 00:24:39.456 "crdt3": 0 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_create_transport", 00:24:39.456 "params": { 00:24:39.456 "trtype": "TCP", 00:24:39.456 "max_queue_depth": 128, 00:24:39.456 "max_io_qpairs_per_ctrlr": 127, 00:24:39.456 "in_capsule_data_size": 4096, 00:24:39.456 "max_io_size": 131072, 00:24:39.456 "io_unit_size": 131072, 00:24:39.456 "max_aq_depth": 128, 00:24:39.456 "num_shared_buffers": 511, 00:24:39.456 "buf_cache_size": 4294967295, 00:24:39.456 "dif_insert_or_strip": false, 00:24:39.456 "zcopy": false, 00:24:39.456 "c2h_success": false, 00:24:39.456 "sock_priority": 0, 00:24:39.456 "abort_timeout_sec": 1, 00:24:39.456 "ack_timeout": 0, 00:24:39.456 "data_wr_pool_size": 0 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_create_subsystem", 00:24:39.456 "params": { 00:24:39.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.456 "allow_any_host": false, 00:24:39.456 "serial_number": "SPDK00000000000001", 00:24:39.456 "model_number": "SPDK bdev Controller", 00:24:39.456 "max_namespaces": 10, 00:24:39.456 "min_cntlid": 1, 00:24:39.456 "max_cntlid": 65519, 00:24:39.456 "ana_reporting": false 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_subsystem_add_host", 00:24:39.456 "params": { 00:24:39.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.456 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.456 "psk": "key0" 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_subsystem_add_ns", 00:24:39.456 "params": { 00:24:39.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.456 "namespace": { 00:24:39.456 "nsid": 1, 00:24:39.456 "bdev_name": "malloc0", 00:24:39.456 "nguid": "D629E11BCF24400588F7E39F19FA3957", 00:24:39.456 "uuid": "d629e11b-cf24-4005-88f7-e39f19fa3957", 00:24:39.456 "no_auto_visible": false 00:24:39.456 } 00:24:39.456 } 00:24:39.456 }, 00:24:39.456 { 00:24:39.456 "method": "nvmf_subsystem_add_listener", 00:24:39.456 "params": { 00:24:39.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.456 "listen_address": { 00:24:39.456 "trtype": "TCP", 00:24:39.456 "adrfam": "IPv4", 00:24:39.456 "traddr": "10.0.0.2", 00:24:39.456 "trsvcid": "4420" 00:24:39.456 }, 00:24:39.456 "secure_channel": true 00:24:39.456 } 00:24:39.456 } 00:24:39.456 ] 00:24:39.456 } 00:24:39.456 ] 00:24:39.456 }' 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3077028 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3077028 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3077028 ']' 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.456 17:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.716 [2024-10-01 17:24:38.040198] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:39.716 [2024-10-01 17:24:38.040259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.716 [2024-10-01 17:24:38.122182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.716 [2024-10-01 17:24:38.150674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.716 [2024-10-01 17:24:38.150707] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.716 [2024-10-01 17:24:38.150713] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.716 [2024-10-01 17:24:38.150718] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.716 [2024-10-01 17:24:38.150722] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.716 [2024-10-01 17:24:38.150768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.976 [2024-10-01 17:24:38.350997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.976 [2024-10-01 17:24:38.383017] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.976 [2024-10-01 17:24:38.383241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3077283 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3077283 /var/tmp/bdevperf.sock 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3077283 ']' 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
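The configuration the target just loaded (PSK key0 read from /tmp/tmp.Ce8noBp1z6, host nqn.2016-06.io.spdk:host1 bound to that PSK, and a secure-channel NVMe/TCP listener on 10.0.0.2:4420) can also be built up at runtime over the RPC socket. That is what the script's setup_nvmf_tgt helper does later in this log (target/tls.sh@52 through @59); a sketch of the same sequence, with $RPC as shorthand for the rpc.py path used throughout this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport, with the options the test script passes
$RPC nvmf_create_transport -t tcp -o
# subsystem plus a listener that requires a secure channel (-k, matching "secure_channel": true above)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
# back namespace 1 with a 32 MiB malloc bdev using 4 KiB blocks
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# register the PSK and allow host1 to connect with it
$RPC keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0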
00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.547 17:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:40.547 "subsystems": [ 00:24:40.547 { 00:24:40.547 "subsystem": "keyring", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "keyring_file_add_key", 00:24:40.547 "params": { 00:24:40.547 "name": "key0", 00:24:40.547 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "iobuf", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "iobuf_set_options", 00:24:40.547 "params": { 00:24:40.547 "small_pool_count": 8192, 00:24:40.547 "large_pool_count": 1024, 00:24:40.547 "small_bufsize": 8192, 00:24:40.547 "large_bufsize": 135168 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "sock", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "sock_set_default_impl", 00:24:40.547 "params": { 00:24:40.547 "impl_name": "posix" 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "sock_impl_set_options", 00:24:40.547 "params": { 00:24:40.547 "impl_name": "ssl", 00:24:40.547 "recv_buf_size": 4096, 00:24:40.547 "send_buf_size": 4096, 00:24:40.547 "enable_recv_pipe": true, 00:24:40.547 "enable_quickack": false, 00:24:40.547 "enable_placement_id": 0, 00:24:40.547 "enable_zerocopy_send_server": true, 00:24:40.547 "enable_zerocopy_send_client": false, 00:24:40.547 "zerocopy_threshold": 0, 00:24:40.547 "tls_version": 0, 00:24:40.547 "enable_ktls": false 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "sock_impl_set_options", 00:24:40.547 "params": { 00:24:40.547 "impl_name": "posix", 00:24:40.547 "recv_buf_size": 2097152, 00:24:40.547 "send_buf_size": 2097152, 00:24:40.547 "enable_recv_pipe": true, 00:24:40.547 "enable_quickack": false, 00:24:40.547 "enable_placement_id": 0, 00:24:40.547 "enable_zerocopy_send_server": true, 00:24:40.547 "enable_zerocopy_send_client": false, 00:24:40.547 "zerocopy_threshold": 0, 00:24:40.547 "tls_version": 0, 00:24:40.547 "enable_ktls": false 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "vmd", 00:24:40.547 "config": [] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "accel", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "accel_set_options", 00:24:40.547 "params": { 00:24:40.547 "small_cache_size": 128, 00:24:40.547 "large_cache_size": 16, 00:24:40.547 "task_count": 2048, 00:24:40.547 "sequence_count": 2048, 00:24:40.547 "buf_count": 2048 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "bdev", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "bdev_set_options", 00:24:40.547 "params": { 00:24:40.547 "bdev_io_pool_size": 65535, 00:24:40.547 "bdev_io_cache_size": 256, 00:24:40.547 "bdev_auto_examine": true, 00:24:40.547 "iobuf_small_cache_size": 128, 00:24:40.547 "iobuf_large_cache_size": 16 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_raid_set_options", 00:24:40.547 
"params": { 00:24:40.547 "process_window_size_kb": 1024, 00:24:40.547 "process_max_bandwidth_mb_sec": 0 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_iscsi_set_options", 00:24:40.547 "params": { 00:24:40.547 "timeout_sec": 30 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_nvme_set_options", 00:24:40.547 "params": { 00:24:40.547 "action_on_timeout": "none", 00:24:40.547 "timeout_us": 0, 00:24:40.547 "timeout_admin_us": 0, 00:24:40.547 "keep_alive_timeout_ms": 10000, 00:24:40.547 "arbitration_burst": 0, 00:24:40.547 "low_priority_weight": 0, 00:24:40.547 "medium_priority_weight": 0, 00:24:40.547 "high_priority_weight": 0, 00:24:40.547 "nvme_adminq_poll_period_us": 10000, 00:24:40.547 "nvme_ioq_poll_period_us": 0, 00:24:40.547 "io_queue_requests": 512, 00:24:40.547 "delay_cmd_submit": true, 00:24:40.547 "transport_retry_count": 4, 00:24:40.547 "bdev_retry_count": 3, 00:24:40.547 "transport_ack_timeout": 0, 00:24:40.547 "ctrlr_loss_timeout_sec": 0, 00:24:40.547 "reconnect_delay_sec": 0, 00:24:40.547 "fast_io_fail_timeout_sec": 0, 00:24:40.547 "disable_auto_failback": false, 00:24:40.547 "generate_uuids": false, 00:24:40.547 "transport_tos": 0, 00:24:40.547 "nvme_error_stat": false, 00:24:40.547 "rdma_srq_size": 0, 00:24:40.547 "io_path_stat": false, 00:24:40.547 "allow_accel_sequence": false, 00:24:40.547 "rdma_max_cq_size": 0, 00:24:40.547 "rdma_cm_event_timeout_ms": 0, 00:24:40.547 "dhchap_digests": [ 00:24:40.547 "sha256", 00:24:40.547 "sha384", 00:24:40.547 "sha512" 00:24:40.548 ], 00:24:40.548 "dhchap_dhgroups": [ 00:24:40.548 "null", 00:24:40.548 "ffdhe2048", 00:24:40.548 "ffdhe3072", 00:24:40.548 "ffdhe4096", 00:24:40.548 "ffdhe6144", 00:24:40.548 "ffdhe8192" 00:24:40.548 ] 00:24:40.548 } 00:24:40.548 }, 00:24:40.548 { 00:24:40.548 "method": "bdev_nvme_attach_controller", 00:24:40.548 "params": { 00:24:40.548 "name": "TLSTEST", 00:24:40.548 "trtype": "TCP", 00:24:40.548 "adrfam": "IPv4", 00:24:40.548 "traddr": "10.0.0.2", 00:24:40.548 "trsvcid": "4420", 00:24:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.548 "prchk_reftag": false, 00:24:40.548 "prchk_guard": false, 00:24:40.548 "ctrlr_loss_timeout_sec": 0, 00:24:40.548 "reconnect_delay_sec": 0, 00:24:40.548 "fast_io_fail_timeout_sec": 0, 00:24:40.548 "psk": "key0", 00:24:40.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.548 "hdgst": false, 00:24:40.548 "ddgst": false 00:24:40.548 } 00:24:40.548 }, 00:24:40.548 { 00:24:40.548 "method": "bdev_nvme_set_hotplug", 00:24:40.548 "params": { 00:24:40.548 "period_us": 100000, 00:24:40.548 "enable": false 00:24:40.548 } 00:24:40.548 }, 00:24:40.548 { 00:24:40.548 "method": "bdev_wait_for_examine" 00:24:40.548 } 00:24:40.548 ] 00:24:40.548 }, 00:24:40.548 { 00:24:40.548 "subsystem": "nbd", 00:24:40.548 "config": [] 00:24:40.548 } 00:24:40.548 ] 00:24:40.548 }' 00:24:40.548 [2024-10-01 17:24:38.916260] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:24:40.548 [2024-10-01 17:24:38.916313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077283 ] 00:24:40.548 [2024-10-01 17:24:38.966591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.548 [2024-10-01 17:24:38.994714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.808 [2024-10-01 17:24:39.123091] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.377 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.377 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:41.377 17:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:41.377 Running I/O for 10 seconds... 00:24:51.345 4680.00 IOPS, 18.28 MiB/s 4981.00 IOPS, 19.46 MiB/s 5188.00 IOPS, 20.27 MiB/s 5131.25 IOPS, 20.04 MiB/s 5133.40 IOPS, 20.05 MiB/s 5129.17 IOPS, 20.04 MiB/s 5117.43 IOPS, 19.99 MiB/s 5097.50 IOPS, 19.91 MiB/s 5099.00 IOPS, 19.92 MiB/s 5084.60 IOPS, 19.86 MiB/s 00:24:51.346 Latency(us) 00:24:51.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.346 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:51.346 Verification LBA range: start 0x0 length 0x2000 00:24:51.346 TLSTESTn1 : 10.02 5086.59 19.87 0.00 0.00 25123.03 5133.65 65972.91 00:24:51.346 =================================================================================================================== 00:24:51.346 Total : 5086.59 19.87 0.00 0.00 25123.03 5133.65 65972.91 00:24:51.346 { 00:24:51.346 "results": [ 00:24:51.346 { 00:24:51.346 "job": "TLSTESTn1", 00:24:51.346 "core_mask": "0x4", 00:24:51.346 "workload": "verify", 00:24:51.346 "status": "finished", 00:24:51.346 "verify_range": { 00:24:51.346 "start": 0, 00:24:51.346 "length": 8192 00:24:51.346 }, 00:24:51.346 "queue_depth": 128, 00:24:51.346 "io_size": 4096, 00:24:51.346 "runtime": 10.021063, 00:24:51.346 "iops": 5086.58612364776, 00:24:51.346 "mibps": 19.869477045499064, 00:24:51.346 "io_failed": 0, 00:24:51.346 "io_timeout": 0, 00:24:51.346 "avg_latency_us": 25123.03037712776, 00:24:51.346 "min_latency_us": 5133.653333333334, 00:24:51.346 "max_latency_us": 65972.90666666666 00:24:51.346 } 00:24:51.346 ], 00:24:51.346 "core_count": 1 00:24:51.346 } 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3077283 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3077283 ']' 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3077283 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.346 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077283 00:24:51.606 17:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:51.606 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:51.606 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077283' 00:24:51.606 killing process with pid 3077283 00:24:51.606 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3077283 00:24:51.606 Received shutdown signal, test time was about 10.000000 seconds 00:24:51.606 00:24:51.606 Latency(us) 00:24:51.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.606 =================================================================================================================== 00:24:51.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.606 17:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3077283 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3077028 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3077028 ']' 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3077028 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077028 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077028' 00:24:51.606 killing process with pid 3077028 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3077028 00:24:51.606 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3077028 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3079403 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3079403 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3079403 ']' 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.867 17:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.867 17:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.867 [2024-10-01 17:24:50.299917] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:51.867 [2024-10-01 17:24:50.299974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.867 [2024-10-01 17:24:50.366721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.867 [2024-10-01 17:24:50.395023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.867 [2024-10-01 17:24:50.395068] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.867 [2024-10-01 17:24:50.395077] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.867 [2024-10-01 17:24:50.395084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.867 [2024-10-01 17:24:50.395090] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.867 [2024-10-01 17:24:50.395116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Ce8noBp1z6 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ce8noBp1z6 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:52.807 [2024-10-01 17:24:51.285006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.807 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:53.068 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:53.329 [2024-10-01 17:24:51.645893] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:24:53.329 [2024-10-01 17:24:51.646129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.329 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:53.329 malloc0 00:24:53.329 17:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:53.589 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:53.849 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3079791 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3079791 /var/tmp/bdevperf.sock 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3079791 ']' 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.110 [2024-10-01 17:24:52.446641] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
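bdevperf is launched here with -z, so it only parks on /var/tmp/bdevperf.sock until a workload is requested; the queue depth, I/O size, workload type and duration come from its command line (-q 128 -o 4k -w verify -t 1). The verify run that produces the IOPS/latency table below is then triggered through the helper script, as target/tls.sh@234 does next:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# kick off the preconfigured verify workload on the idle bdevperf instance
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# the earlier 10-second run used the same helper with a client-side timeout:
#   bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests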
00:24:54.110 [2024-10-01 17:24:52.446694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079791 ] 00:24:54.110 [2024-10-01 17:24:52.523087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.110 [2024-10-01 17:24:52.551452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:54.110 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:54.371 17:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:54.631 [2024-10-01 17:24:52.931899] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.631 nvme0n1 00:24:54.631 17:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.631 Running I/O for 1 seconds... 00:24:55.573 4706.00 IOPS, 18.38 MiB/s 00:24:55.573 Latency(us) 00:24:55.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.573 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:55.573 Verification LBA range: start 0x0 length 0x2000 00:24:55.573 nvme0n1 : 1.02 4748.05 18.55 0.00 0.00 26737.98 4587.52 73400.32 00:24:55.573 =================================================================================================================== 00:24:55.573 Total : 4748.05 18.55 0.00 0.00 26737.98 4587.52 73400.32 00:24:55.573 { 00:24:55.573 "results": [ 00:24:55.573 { 00:24:55.573 "job": "nvme0n1", 00:24:55.573 "core_mask": "0x2", 00:24:55.573 "workload": "verify", 00:24:55.573 "status": "finished", 00:24:55.573 "verify_range": { 00:24:55.573 "start": 0, 00:24:55.573 "length": 8192 00:24:55.573 }, 00:24:55.573 "queue_depth": 128, 00:24:55.573 "io_size": 4096, 00:24:55.573 "runtime": 1.018313, 00:24:55.573 "iops": 4748.048979046717, 00:24:55.573 "mibps": 18.54706632440124, 00:24:55.573 "io_failed": 0, 00:24:55.573 "io_timeout": 0, 00:24:55.573 "avg_latency_us": 26737.976388831437, 00:24:55.573 "min_latency_us": 4587.52, 00:24:55.573 "max_latency_us": 73400.32 00:24:55.573 } 00:24:55.573 ], 00:24:55.573 "core_count": 1 00:24:55.573 } 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3079791 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3079791 ']' 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3079791 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.834 
17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3079791 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3079791' 00:24:55.834 killing process with pid 3079791 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3079791 00:24:55.834 Received shutdown signal, test time was about 1.000000 seconds 00:24:55.834 00:24:55.834 Latency(us) 00:24:55.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.834 =================================================================================================================== 00:24:55.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3079791 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3079403 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3079403 ']' 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3079403 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3079403 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3079403' 00:24:55.834 killing process with pid 3079403 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3079403 00:24:55.834 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3079403 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3080309 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3080309 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3080309 ']' 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.095 
17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.095 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.096 [2024-10-01 17:24:54.567799] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:56.096 [2024-10-01 17:24:54.567858] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.096 [2024-10-01 17:24:54.634128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.356 [2024-10-01 17:24:54.665087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.356 [2024-10-01 17:24:54.665127] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.356 [2024-10-01 17:24:54.665135] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.356 [2024-10-01 17:24:54.665142] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.356 [2024-10-01 17:24:54.665149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:56.356 [2024-10-01 17:24:54.665173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.356 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.357 [2024-10-01 17:24:54.793077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.357 malloc0 00:24:56.357 [2024-10-01 17:24:54.831763] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.357 [2024-10-01 17:24:54.832008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3080458 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3080458 /var/tmp/bdevperf.sock 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3080458 ']' 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.357 17:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.617 [2024-10-01 17:24:54.910434] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
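Both ends of the test snapshot their live configuration with save_config (the target via rpc_cmd at target/tls.sh@267 and bdevperf at @268 below); the same dump is a convenient manual check that the keyring entry, the ssl socket implementation and the listener really landed in the running config. A sketch, assuming the target is reachable on the default /var/tmp/spdk.sock RPC socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# dump the target's running configuration as JSON
$RPC save_config
# dump bdevperf's configuration over its dedicated socket
$RPC -s /var/tmp/bdevperf.sock save_config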
00:24:56.617 [2024-10-01 17:24:54.910483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080458 ] 00:24:56.617 [2024-10-01 17:24:54.985005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.617 [2024-10-01 17:24:55.013317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.617 17:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.617 17:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:56.617 17:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ce8noBp1z6 00:24:56.877 17:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:56.877 [2024-10-01 17:24:55.421804] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.137 nvme0n1 00:24:57.137 17:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:57.137 Running I/O for 1 seconds... 00:24:58.079 3602.00 IOPS, 14.07 MiB/s 00:24:58.079 Latency(us) 00:24:58.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.079 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:58.079 Verification LBA range: start 0x0 length 0x2000 00:24:58.079 nvme0n1 : 1.01 3676.68 14.36 0.00 0.00 34603.94 5379.41 77769.39 00:24:58.079 =================================================================================================================== 00:24:58.079 Total : 3676.68 14.36 0.00 0.00 34603.94 5379.41 77769.39 00:24:58.079 { 00:24:58.079 "results": [ 00:24:58.079 { 00:24:58.079 "job": "nvme0n1", 00:24:58.079 "core_mask": "0x2", 00:24:58.079 "workload": "verify", 00:24:58.079 "status": "finished", 00:24:58.079 "verify_range": { 00:24:58.079 "start": 0, 00:24:58.079 "length": 8192 00:24:58.079 }, 00:24:58.079 "queue_depth": 128, 00:24:58.079 "io_size": 4096, 00:24:58.079 "runtime": 1.014502, 00:24:58.079 "iops": 3676.6807753952185, 00:24:58.079 "mibps": 14.362034278887572, 00:24:58.079 "io_failed": 0, 00:24:58.079 "io_timeout": 0, 00:24:58.079 "avg_latency_us": 34603.93774441465, 00:24:58.079 "min_latency_us": 5379.413333333333, 00:24:58.079 "max_latency_us": 77769.38666666667 00:24:58.079 } 00:24:58.079 ], 00:24:58.079 "core_count": 1 00:24:58.079 } 00:24:58.340 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:58.340 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.340 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.340 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.340 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:58.340 "subsystems": [ 
00:24:58.340 { 00:24:58.340 "subsystem": "keyring", 00:24:58.340 "config": [ 00:24:58.340 { 00:24:58.340 "method": "keyring_file_add_key", 00:24:58.340 "params": { 00:24:58.340 "name": "key0", 00:24:58.340 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:58.340 } 00:24:58.340 } 00:24:58.340 ] 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "subsystem": "iobuf", 00:24:58.340 "config": [ 00:24:58.340 { 00:24:58.340 "method": "iobuf_set_options", 00:24:58.340 "params": { 00:24:58.340 "small_pool_count": 8192, 00:24:58.340 "large_pool_count": 1024, 00:24:58.340 "small_bufsize": 8192, 00:24:58.340 "large_bufsize": 135168 00:24:58.340 } 00:24:58.340 } 00:24:58.340 ] 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "subsystem": "sock", 00:24:58.340 "config": [ 00:24:58.340 { 00:24:58.340 "method": "sock_set_default_impl", 00:24:58.340 "params": { 00:24:58.340 "impl_name": "posix" 00:24:58.340 } 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "method": "sock_impl_set_options", 00:24:58.340 "params": { 00:24:58.340 "impl_name": "ssl", 00:24:58.340 "recv_buf_size": 4096, 00:24:58.340 "send_buf_size": 4096, 00:24:58.340 "enable_recv_pipe": true, 00:24:58.340 "enable_quickack": false, 00:24:58.340 "enable_placement_id": 0, 00:24:58.340 "enable_zerocopy_send_server": true, 00:24:58.340 "enable_zerocopy_send_client": false, 00:24:58.340 "zerocopy_threshold": 0, 00:24:58.340 "tls_version": 0, 00:24:58.340 "enable_ktls": false 00:24:58.340 } 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "method": "sock_impl_set_options", 00:24:58.340 "params": { 00:24:58.340 "impl_name": "posix", 00:24:58.340 "recv_buf_size": 2097152, 00:24:58.340 "send_buf_size": 2097152, 00:24:58.340 "enable_recv_pipe": true, 00:24:58.340 "enable_quickack": false, 00:24:58.340 "enable_placement_id": 0, 00:24:58.340 "enable_zerocopy_send_server": true, 00:24:58.340 "enable_zerocopy_send_client": false, 00:24:58.340 "zerocopy_threshold": 0, 00:24:58.340 "tls_version": 0, 00:24:58.340 "enable_ktls": false 00:24:58.340 } 00:24:58.340 } 00:24:58.340 ] 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "subsystem": "vmd", 00:24:58.340 "config": [] 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "subsystem": "accel", 00:24:58.340 "config": [ 00:24:58.340 { 00:24:58.340 "method": "accel_set_options", 00:24:58.340 "params": { 00:24:58.340 "small_cache_size": 128, 00:24:58.340 "large_cache_size": 16, 00:24:58.340 "task_count": 2048, 00:24:58.340 "sequence_count": 2048, 00:24:58.340 "buf_count": 2048 00:24:58.340 } 00:24:58.340 } 00:24:58.340 ] 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "subsystem": "bdev", 00:24:58.340 "config": [ 00:24:58.340 { 00:24:58.340 "method": "bdev_set_options", 00:24:58.340 "params": { 00:24:58.340 "bdev_io_pool_size": 65535, 00:24:58.340 "bdev_io_cache_size": 256, 00:24:58.340 "bdev_auto_examine": true, 00:24:58.340 "iobuf_small_cache_size": 128, 00:24:58.340 "iobuf_large_cache_size": 16 00:24:58.340 } 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "method": "bdev_raid_set_options", 00:24:58.340 "params": { 00:24:58.340 "process_window_size_kb": 1024, 00:24:58.340 "process_max_bandwidth_mb_sec": 0 00:24:58.340 } 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "method": "bdev_iscsi_set_options", 00:24:58.340 "params": { 00:24:58.340 "timeout_sec": 30 00:24:58.340 } 00:24:58.340 }, 00:24:58.340 { 00:24:58.340 "method": "bdev_nvme_set_options", 00:24:58.340 "params": { 00:24:58.340 "action_on_timeout": "none", 00:24:58.340 "timeout_us": 0, 00:24:58.340 "timeout_admin_us": 0, 00:24:58.340 "keep_alive_timeout_ms": 10000, 00:24:58.340 "arbitration_burst": 0, 
00:24:58.340 "low_priority_weight": 0, 00:24:58.340 "medium_priority_weight": 0, 00:24:58.340 "high_priority_weight": 0, 00:24:58.340 "nvme_adminq_poll_period_us": 10000, 00:24:58.340 "nvme_ioq_poll_period_us": 0, 00:24:58.340 "io_queue_requests": 0, 00:24:58.340 "delay_cmd_submit": true, 00:24:58.340 "transport_retry_count": 4, 00:24:58.340 "bdev_retry_count": 3, 00:24:58.341 "transport_ack_timeout": 0, 00:24:58.341 "ctrlr_loss_timeout_sec": 0, 00:24:58.341 "reconnect_delay_sec": 0, 00:24:58.341 "fast_io_fail_timeout_sec": 0, 00:24:58.341 "disable_auto_failback": false, 00:24:58.341 "generate_uuids": false, 00:24:58.341 "transport_tos": 0, 00:24:58.341 "nvme_error_stat": false, 00:24:58.341 "rdma_srq_size": 0, 00:24:58.341 "io_path_stat": false, 00:24:58.341 "allow_accel_sequence": false, 00:24:58.341 "rdma_max_cq_size": 0, 00:24:58.341 "rdma_cm_event_timeout_ms": 0, 00:24:58.341 "dhchap_digests": [ 00:24:58.341 "sha256", 00:24:58.341 "sha384", 00:24:58.341 "sha512" 00:24:58.341 ], 00:24:58.341 "dhchap_dhgroups": [ 00:24:58.341 "null", 00:24:58.341 "ffdhe2048", 00:24:58.341 "ffdhe3072", 00:24:58.341 "ffdhe4096", 00:24:58.341 "ffdhe6144", 00:24:58.341 "ffdhe8192" 00:24:58.341 ] 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "bdev_nvme_set_hotplug", 00:24:58.341 "params": { 00:24:58.341 "period_us": 100000, 00:24:58.341 "enable": false 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "bdev_malloc_create", 00:24:58.341 "params": { 00:24:58.341 "name": "malloc0", 00:24:58.341 "num_blocks": 8192, 00:24:58.341 "block_size": 4096, 00:24:58.341 "physical_block_size": 4096, 00:24:58.341 "uuid": "eb1618fa-a0ac-41b2-8da6-1b39bfe663d9", 00:24:58.341 "optimal_io_boundary": 0, 00:24:58.341 "md_size": 0, 00:24:58.341 "dif_type": 0, 00:24:58.341 "dif_is_head_of_md": false, 00:24:58.341 "dif_pi_format": 0 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "bdev_wait_for_examine" 00:24:58.341 } 00:24:58.341 ] 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "subsystem": "nbd", 00:24:58.341 "config": [] 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "subsystem": "scheduler", 00:24:58.341 "config": [ 00:24:58.341 { 00:24:58.341 "method": "framework_set_scheduler", 00:24:58.341 "params": { 00:24:58.341 "name": "static" 00:24:58.341 } 00:24:58.341 } 00:24:58.341 ] 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "subsystem": "nvmf", 00:24:58.341 "config": [ 00:24:58.341 { 00:24:58.341 "method": "nvmf_set_config", 00:24:58.341 "params": { 00:24:58.341 "discovery_filter": "match_any", 00:24:58.341 "admin_cmd_passthru": { 00:24:58.341 "identify_ctrlr": false 00:24:58.341 }, 00:24:58.341 "dhchap_digests": [ 00:24:58.341 "sha256", 00:24:58.341 "sha384", 00:24:58.341 "sha512" 00:24:58.341 ], 00:24:58.341 "dhchap_dhgroups": [ 00:24:58.341 "null", 00:24:58.341 "ffdhe2048", 00:24:58.341 "ffdhe3072", 00:24:58.341 "ffdhe4096", 00:24:58.341 "ffdhe6144", 00:24:58.341 "ffdhe8192" 00:24:58.341 ] 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_set_max_subsystems", 00:24:58.341 "params": { 00:24:58.341 "max_subsystems": 1024 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_set_crdt", 00:24:58.341 "params": { 00:24:58.341 "crdt1": 0, 00:24:58.341 "crdt2": 0, 00:24:58.341 "crdt3": 0 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_create_transport", 00:24:58.341 "params": { 00:24:58.341 "trtype": "TCP", 00:24:58.341 "max_queue_depth": 128, 00:24:58.341 "max_io_qpairs_per_ctrlr": 127, 00:24:58.341 
"in_capsule_data_size": 4096, 00:24:58.341 "max_io_size": 131072, 00:24:58.341 "io_unit_size": 131072, 00:24:58.341 "max_aq_depth": 128, 00:24:58.341 "num_shared_buffers": 511, 00:24:58.341 "buf_cache_size": 4294967295, 00:24:58.341 "dif_insert_or_strip": false, 00:24:58.341 "zcopy": false, 00:24:58.341 "c2h_success": false, 00:24:58.341 "sock_priority": 0, 00:24:58.341 "abort_timeout_sec": 1, 00:24:58.341 "ack_timeout": 0, 00:24:58.341 "data_wr_pool_size": 0 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_create_subsystem", 00:24:58.341 "params": { 00:24:58.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.341 "allow_any_host": false, 00:24:58.341 "serial_number": "00000000000000000000", 00:24:58.341 "model_number": "SPDK bdev Controller", 00:24:58.341 "max_namespaces": 32, 00:24:58.341 "min_cntlid": 1, 00:24:58.341 "max_cntlid": 65519, 00:24:58.341 "ana_reporting": false 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_subsystem_add_host", 00:24:58.341 "params": { 00:24:58.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.341 "host": "nqn.2016-06.io.spdk:host1", 00:24:58.341 "psk": "key0" 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_subsystem_add_ns", 00:24:58.341 "params": { 00:24:58.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.341 "namespace": { 00:24:58.341 "nsid": 1, 00:24:58.341 "bdev_name": "malloc0", 00:24:58.341 "nguid": "EB1618FAA0AC41B28DA61B39BFE663D9", 00:24:58.341 "uuid": "eb1618fa-a0ac-41b2-8da6-1b39bfe663d9", 00:24:58.341 "no_auto_visible": false 00:24:58.341 } 00:24:58.341 } 00:24:58.341 }, 00:24:58.341 { 00:24:58.341 "method": "nvmf_subsystem_add_listener", 00:24:58.341 "params": { 00:24:58.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.341 "listen_address": { 00:24:58.341 "trtype": "TCP", 00:24:58.341 "adrfam": "IPv4", 00:24:58.341 "traddr": "10.0.0.2", 00:24:58.341 "trsvcid": "4420" 00:24:58.341 }, 00:24:58.341 "secure_channel": false, 00:24:58.341 "sock_impl": "ssl" 00:24:58.341 } 00:24:58.341 } 00:24:58.341 ] 00:24:58.341 } 00:24:58.341 ] 00:24:58.341 }' 00:24:58.341 17:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:58.602 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:58.602 "subsystems": [ 00:24:58.602 { 00:24:58.602 "subsystem": "keyring", 00:24:58.602 "config": [ 00:24:58.602 { 00:24:58.602 "method": "keyring_file_add_key", 00:24:58.602 "params": { 00:24:58.602 "name": "key0", 00:24:58.602 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:58.602 } 00:24:58.602 } 00:24:58.602 ] 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "subsystem": "iobuf", 00:24:58.602 "config": [ 00:24:58.602 { 00:24:58.602 "method": "iobuf_set_options", 00:24:58.602 "params": { 00:24:58.602 "small_pool_count": 8192, 00:24:58.602 "large_pool_count": 1024, 00:24:58.602 "small_bufsize": 8192, 00:24:58.602 "large_bufsize": 135168 00:24:58.602 } 00:24:58.602 } 00:24:58.602 ] 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "subsystem": "sock", 00:24:58.602 "config": [ 00:24:58.602 { 00:24:58.602 "method": "sock_set_default_impl", 00:24:58.602 "params": { 00:24:58.602 "impl_name": "posix" 00:24:58.602 } 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "method": "sock_impl_set_options", 00:24:58.602 "params": { 00:24:58.602 "impl_name": "ssl", 00:24:58.602 "recv_buf_size": 4096, 00:24:58.602 "send_buf_size": 4096, 00:24:58.602 "enable_recv_pipe": true, 00:24:58.602 
"enable_quickack": false, 00:24:58.602 "enable_placement_id": 0, 00:24:58.602 "enable_zerocopy_send_server": true, 00:24:58.602 "enable_zerocopy_send_client": false, 00:24:58.602 "zerocopy_threshold": 0, 00:24:58.602 "tls_version": 0, 00:24:58.602 "enable_ktls": false 00:24:58.602 } 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "method": "sock_impl_set_options", 00:24:58.602 "params": { 00:24:58.602 "impl_name": "posix", 00:24:58.602 "recv_buf_size": 2097152, 00:24:58.602 "send_buf_size": 2097152, 00:24:58.602 "enable_recv_pipe": true, 00:24:58.602 "enable_quickack": false, 00:24:58.602 "enable_placement_id": 0, 00:24:58.602 "enable_zerocopy_send_server": true, 00:24:58.602 "enable_zerocopy_send_client": false, 00:24:58.602 "zerocopy_threshold": 0, 00:24:58.602 "tls_version": 0, 00:24:58.602 "enable_ktls": false 00:24:58.602 } 00:24:58.602 } 00:24:58.602 ] 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "subsystem": "vmd", 00:24:58.602 "config": [] 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "subsystem": "accel", 00:24:58.602 "config": [ 00:24:58.602 { 00:24:58.602 "method": "accel_set_options", 00:24:58.602 "params": { 00:24:58.602 "small_cache_size": 128, 00:24:58.602 "large_cache_size": 16, 00:24:58.602 "task_count": 2048, 00:24:58.602 "sequence_count": 2048, 00:24:58.602 "buf_count": 2048 00:24:58.602 } 00:24:58.602 } 00:24:58.602 ] 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "subsystem": "bdev", 00:24:58.602 "config": [ 00:24:58.602 { 00:24:58.602 "method": "bdev_set_options", 00:24:58.602 "params": { 00:24:58.602 "bdev_io_pool_size": 65535, 00:24:58.602 "bdev_io_cache_size": 256, 00:24:58.602 "bdev_auto_examine": true, 00:24:58.602 "iobuf_small_cache_size": 128, 00:24:58.602 "iobuf_large_cache_size": 16 00:24:58.602 } 00:24:58.602 }, 00:24:58.602 { 00:24:58.602 "method": "bdev_raid_set_options", 00:24:58.602 "params": { 00:24:58.603 "process_window_size_kb": 1024, 00:24:58.603 "process_max_bandwidth_mb_sec": 0 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_iscsi_set_options", 00:24:58.603 "params": { 00:24:58.603 "timeout_sec": 30 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_nvme_set_options", 00:24:58.603 "params": { 00:24:58.603 "action_on_timeout": "none", 00:24:58.603 "timeout_us": 0, 00:24:58.603 "timeout_admin_us": 0, 00:24:58.603 "keep_alive_timeout_ms": 10000, 00:24:58.603 "arbitration_burst": 0, 00:24:58.603 "low_priority_weight": 0, 00:24:58.603 "medium_priority_weight": 0, 00:24:58.603 "high_priority_weight": 0, 00:24:58.603 "nvme_adminq_poll_period_us": 10000, 00:24:58.603 "nvme_ioq_poll_period_us": 0, 00:24:58.603 "io_queue_requests": 512, 00:24:58.603 "delay_cmd_submit": true, 00:24:58.603 "transport_retry_count": 4, 00:24:58.603 "bdev_retry_count": 3, 00:24:58.603 "transport_ack_timeout": 0, 00:24:58.603 "ctrlr_loss_timeout_sec": 0, 00:24:58.603 "reconnect_delay_sec": 0, 00:24:58.603 "fast_io_fail_timeout_sec": 0, 00:24:58.603 "disable_auto_failback": false, 00:24:58.603 "generate_uuids": false, 00:24:58.603 "transport_tos": 0, 00:24:58.603 "nvme_error_stat": false, 00:24:58.603 "rdma_srq_size": 0, 00:24:58.603 "io_path_stat": false, 00:24:58.603 "allow_accel_sequence": false, 00:24:58.603 "rdma_max_cq_size": 0, 00:24:58.603 "rdma_cm_event_timeout_ms": 0, 00:24:58.603 "dhchap_digests": [ 00:24:58.603 "sha256", 00:24:58.603 "sha384", 00:24:58.603 "sha512" 00:24:58.603 ], 00:24:58.603 "dhchap_dhgroups": [ 00:24:58.603 "null", 00:24:58.603 "ffdhe2048", 00:24:58.603 "ffdhe3072", 00:24:58.603 "ffdhe4096", 00:24:58.603 
"ffdhe6144", 00:24:58.603 "ffdhe8192" 00:24:58.603 ] 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_nvme_attach_controller", 00:24:58.603 "params": { 00:24:58.603 "name": "nvme0", 00:24:58.603 "trtype": "TCP", 00:24:58.603 "adrfam": "IPv4", 00:24:58.603 "traddr": "10.0.0.2", 00:24:58.603 "trsvcid": "4420", 00:24:58.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.603 "prchk_reftag": false, 00:24:58.603 "prchk_guard": false, 00:24:58.603 "ctrlr_loss_timeout_sec": 0, 00:24:58.603 "reconnect_delay_sec": 0, 00:24:58.603 "fast_io_fail_timeout_sec": 0, 00:24:58.603 "psk": "key0", 00:24:58.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:58.603 "hdgst": false, 00:24:58.603 "ddgst": false 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_nvme_set_hotplug", 00:24:58.603 "params": { 00:24:58.603 "period_us": 100000, 00:24:58.603 "enable": false 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_enable_histogram", 00:24:58.603 "params": { 00:24:58.603 "name": "nvme0n1", 00:24:58.603 "enable": true 00:24:58.603 } 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "method": "bdev_wait_for_examine" 00:24:58.603 } 00:24:58.603 ] 00:24:58.603 }, 00:24:58.603 { 00:24:58.603 "subsystem": "nbd", 00:24:58.603 "config": [] 00:24:58.603 } 00:24:58.603 ] 00:24:58.603 }' 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3080458 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3080458 ']' 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3080458 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080458 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080458' 00:24:58.603 killing process with pid 3080458 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3080458 00:24:58.603 Received shutdown signal, test time was about 1.000000 seconds 00:24:58.603 00:24:58.603 Latency(us) 00:24:58.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.603 =================================================================================================================== 00:24:58.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.603 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3080458 00:24:58.863 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3080309 ']' 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080309' 00:24:58.864 killing process with pid 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3080309 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.864 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:58.864 "subsystems": [ 00:24:58.864 { 00:24:58.864 "subsystem": "keyring", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "keyring_file_add_key", 00:24:58.864 "params": { 00:24:58.864 "name": "key0", 00:24:58.864 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:58.864 } 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "iobuf", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "iobuf_set_options", 00:24:58.864 "params": { 00:24:58.864 "small_pool_count": 8192, 00:24:58.864 "large_pool_count": 1024, 00:24:58.864 "small_bufsize": 8192, 00:24:58.864 "large_bufsize": 135168 00:24:58.864 } 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "sock", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "sock_set_default_impl", 00:24:58.864 "params": { 00:24:58.864 "impl_name": "posix" 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "sock_impl_set_options", 00:24:58.864 "params": { 00:24:58.864 "impl_name": "ssl", 00:24:58.864 "recv_buf_size": 4096, 00:24:58.864 "send_buf_size": 4096, 00:24:58.864 "enable_recv_pipe": true, 00:24:58.864 "enable_quickack": false, 00:24:58.864 "enable_placement_id": 0, 00:24:58.864 "enable_zerocopy_send_server": true, 00:24:58.864 "enable_zerocopy_send_client": false, 00:24:58.864 "zerocopy_threshold": 0, 00:24:58.864 "tls_version": 0, 00:24:58.864 "enable_ktls": false 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "sock_impl_set_options", 00:24:58.864 "params": { 00:24:58.864 "impl_name": "posix", 00:24:58.864 "recv_buf_size": 2097152, 00:24:58.864 "send_buf_size": 2097152, 00:24:58.864 "enable_recv_pipe": true, 00:24:58.864 "enable_quickack": false, 00:24:58.864 "enable_placement_id": 0, 00:24:58.864 "enable_zerocopy_send_server": true, 00:24:58.864 "enable_zerocopy_send_client": false, 00:24:58.864 "zerocopy_threshold": 0, 00:24:58.864 "tls_version": 0, 00:24:58.864 "enable_ktls": false 00:24:58.864 } 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "vmd", 00:24:58.864 "config": [] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "accel", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "accel_set_options", 00:24:58.864 "params": { 
00:24:58.864 "small_cache_size": 128, 00:24:58.864 "large_cache_size": 16, 00:24:58.864 "task_count": 2048, 00:24:58.864 "sequence_count": 2048, 00:24:58.864 "buf_count": 2048 00:24:58.864 } 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "bdev", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "bdev_set_options", 00:24:58.864 "params": { 00:24:58.864 "bdev_io_pool_size": 65535, 00:24:58.864 "bdev_io_cache_size": 256, 00:24:58.864 "bdev_auto_examine": true, 00:24:58.864 "iobuf_small_cache_size": 128, 00:24:58.864 "iobuf_large_cache_size": 16 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_raid_set_options", 00:24:58.864 "params": { 00:24:58.864 "process_window_size_kb": 1024, 00:24:58.864 "process_max_bandwidth_mb_sec": 0 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_iscsi_set_options", 00:24:58.864 "params": { 00:24:58.864 "timeout_sec": 30 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_nvme_set_options", 00:24:58.864 "params": { 00:24:58.864 "action_on_timeout": "none", 00:24:58.864 "timeout_us": 0, 00:24:58.864 "timeout_admin_us": 0, 00:24:58.864 "keep_alive_timeout_ms": 10000, 00:24:58.864 "arbitration_burst": 0, 00:24:58.864 "low_priority_weight": 0, 00:24:58.864 "medium_priority_weight": 0, 00:24:58.864 "high_priority_weight": 0, 00:24:58.864 "nvme_adminq_poll_period_us": 10000, 00:24:58.864 "nvme_ioq_poll_period_us": 0, 00:24:58.864 "io_queue_requests": 0, 00:24:58.864 "delay_cmd_submit": true, 00:24:58.864 "transport_retry_count": 4, 00:24:58.864 "bdev_retry_count": 3, 00:24:58.864 "transport_ack_timeout": 0, 00:24:58.864 "ctrlr_loss_timeout_sec": 0, 00:24:58.864 "reconnect_delay_sec": 0, 00:24:58.864 "fast_io_fail_timeout_sec": 0, 00:24:58.864 "disable_auto_failback": false, 00:24:58.864 "generate_uuids": false, 00:24:58.864 "transport_tos": 0, 00:24:58.864 "nvme_error_stat": false, 00:24:58.864 "rdma_srq_size": 0, 00:24:58.864 "io_path_stat": false, 00:24:58.864 "allow_accel_sequence": false, 00:24:58.864 "rdma_max_cq_size": 0, 00:24:58.864 "rdma_cm_event_timeout_ms": 0, 00:24:58.864 "dhchap_digests": [ 00:24:58.864 "sha256", 00:24:58.864 "sha384", 00:24:58.864 "sha512" 00:24:58.864 ], 00:24:58.864 "dhchap_dhgroups": [ 00:24:58.864 "null", 00:24:58.864 "ffdhe2048", 00:24:58.864 "ffdhe3072", 00:24:58.864 "ffdhe4096", 00:24:58.864 "ffdhe6144", 00:24:58.864 "ffdhe8192" 00:24:58.864 ] 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_nvme_set_hotplug", 00:24:58.864 "params": { 00:24:58.864 "period_us": 100000, 00:24:58.864 "enable": false 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_malloc_create", 00:24:58.864 "params": { 00:24:58.864 "name": "malloc0", 00:24:58.864 "num_blocks": 8192, 00:24:58.864 "block_size": 4096, 00:24:58.864 "physical_block_size": 4096, 00:24:58.864 "uuid": "eb1618fa-a0ac-41b2-8da6-1b39bfe663d9", 00:24:58.864 "optimal_io_boundary": 0, 00:24:58.864 "md_size": 0, 00:24:58.864 "dif_type": 0, 00:24:58.864 "dif_is_head_of_md": false, 00:24:58.864 "dif_pi_format": 0 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "bdev_wait_for_examine" 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "nbd", 00:24:58.864 "config": [] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "scheduler", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "framework_set_scheduler", 00:24:58.864 "params": { 00:24:58.864 "name": 
"static" 00:24:58.864 } 00:24:58.864 } 00:24:58.864 ] 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "subsystem": "nvmf", 00:24:58.864 "config": [ 00:24:58.864 { 00:24:58.864 "method": "nvmf_set_config", 00:24:58.864 "params": { 00:24:58.864 "discovery_filter": "match_any", 00:24:58.864 "admin_cmd_passthru": { 00:24:58.864 "identify_ctrlr": false 00:24:58.864 }, 00:24:58.864 "dhchap_digests": [ 00:24:58.864 "sha256", 00:24:58.864 "sha384", 00:24:58.864 "sha512" 00:24:58.864 ], 00:24:58.864 "dhchap_dhgroups": [ 00:24:58.864 "null", 00:24:58.864 "ffdhe2048", 00:24:58.864 "ffdhe3072", 00:24:58.864 "ffdhe4096", 00:24:58.864 "ffdhe6144", 00:24:58.864 "ffdhe8192" 00:24:58.864 ] 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "nvmf_set_max_subsystems", 00:24:58.864 "params": { 00:24:58.864 "max_subsystems": 1024 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "nvmf_set_crdt", 00:24:58.864 "params": { 00:24:58.864 "crdt1": 0, 00:24:58.864 "crdt2": 0, 00:24:58.864 "crdt3": 0 00:24:58.864 } 00:24:58.864 }, 00:24:58.864 { 00:24:58.864 "method": "nvmf_create_transport", 00:24:58.864 "params": { 00:24:58.864 "trtype": "TCP", 00:24:58.864 "max_queue_depth": 128, 00:24:58.864 "max_io_qpairs_per_ctrlr": 127, 00:24:58.864 "in_capsule_data_size": 4096, 00:24:58.864 "max_io_size": 131072, 00:24:58.864 "io_unit_size": 131072, 00:24:58.864 "max_aq_depth": 128, 00:24:58.864 "num_shared_buffers": 511, 00:24:58.864 "buf_cache_size": 4294967295, 00:24:58.865 "dif_insert_or_strip": false, 00:24:58.865 "zcopy": false, 00:24:58.865 "c2h_success": false, 00:24:58.865 "sock_priority": 0, 00:24:58.865 "abort_timeout_sec": 1, 00:24:58.865 "ack_timeout": 0, 00:24:58.865 "data_wr_pool_size": 0 00:24:58.865 } 00:24:58.865 }, 00:24:58.865 { 00:24:58.865 "method": "nvmf_create_subsystem", 00:24:58.865 "params": { 00:24:58.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.865 "allow_any_host": false, 00:24:58.865 "serial_number": "00000000000000000000", 00:24:58.865 "model_number": "SPDK bdev Controller", 00:24:58.865 "max_namespaces": 32, 00:24:58.865 "min_cntlid": 1, 00:24:58.865 "max_cntlid": 65519, 00:24:58.865 "ana_reporting": false 00:24:58.865 } 00:24:58.865 }, 00:24:58.865 { 00:24:58.865 "method": "nvmf_subsystem_add_host", 00:24:58.865 "params": { 00:24:58.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.865 "host": "nqn.2016-06.io.spdk:host1", 00:24:58.865 "psk": "key0" 00:24:58.865 } 00:24:58.865 }, 00:24:58.865 { 00:24:58.865 "method": "nvmf_subsystem_add_ns", 00:24:58.865 "params": { 00:24:58.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.865 "namespace": { 00:24:58.865 "nsid": 1, 00:24:58.865 "bdev_name": "malloc0", 00:24:58.865 "nguid": "EB1618FAA0AC41B28DA61B39BFE663D9", 00:24:58.865 "uuid": "eb1618fa-a0ac-41b2-8da6-1b39bfe663d9", 00:24:58.865 "no_auto_visible": false 00:24:58.865 } 00:24:58.865 } 00:24:58.865 }, 00:24:58.865 { 00:24:58.865 "method": "nvmf_subsystem_add_listener", 00:24:58.865 "params": { 00:24:58.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.865 "listen_address": { 00:24:58.865 "trtype": "TCP", 00:24:58.865 "adrfam": "IPv4", 00:24:58.865 "traddr": "10.0.0.2", 00:24:58.865 "trsvcid": "4420" 00:24:58.865 }, 00:24:58.865 "secure_channel": false, 00:24:58.865 "sock_impl": "ssl" 00:24:58.865 } 00:24:58.865 } 00:24:58.865 ] 00:24:58.865 } 00:24:58.865 ] 00:24:58.865 }' 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=3080820 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3080820 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3080820 ']' 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.865 17:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.126 [2024-10-01 17:24:57.457852] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:59.126 [2024-10-01 17:24:57.457910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.126 [2024-10-01 17:24:57.525383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.126 [2024-10-01 17:24:57.556801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.126 [2024-10-01 17:24:57.556841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.126 [2024-10-01 17:24:57.556849] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.126 [2024-10-01 17:24:57.556856] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.126 [2024-10-01 17:24:57.556862] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:59.126 [2024-10-01 17:24:57.556914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.387 [2024-10-01 17:24:57.766456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.387 [2024-10-01 17:24:57.798468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.387 [2024-10-01 17:24:57.798711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3081162 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3081162 /var/tmp/bdevperf.sock 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3081162 ']' 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
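The flow recorded above and continued below is easy to lose in the xtrace noise: target/tls.sh saves the live target configuration (tgtcfg) and the bdevperf-side configuration (bperfcfg) with save_config, stops both processes, then restarts them with those JSON blobs fed back in over /dev/fd file descriptors, so the TLS key (key0 -> /tmp/tmp.Ce8noBp1z6) and the PSK-protected attach are re-created purely from config. A rough standalone sketch of that pattern, not the literal tls.sh code: the bare command names, the $tgtcfg/$bperfcfg variables and the omitted "ip netns exec cvl_0_0_ns_spdk" prefix are readability assumptions, where the trace uses the full Jenkins workspace paths.

  # target side: replay the saved target config into a fresh nvmf_tgt (arrives as /dev/fd/62 in the trace)
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

  # initiator side: bdevperf reads its config the same way (/dev/fd/63 in the trace); the embedded
  # bdev_nvme_attach_controller entry carries "psk": "key0", so the TLS connection is made at startup
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

  # once the RPC socket answers, confirm the controller exists and drive I/O through it
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests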
00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 17:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:59.958 "subsystems": [ 00:24:59.958 { 00:24:59.958 "subsystem": "keyring", 00:24:59.958 "config": [ 00:24:59.958 { 00:24:59.958 "method": "keyring_file_add_key", 00:24:59.958 "params": { 00:24:59.958 "name": "key0", 00:24:59.958 "path": "/tmp/tmp.Ce8noBp1z6" 00:24:59.958 } 00:24:59.958 } 00:24:59.958 ] 00:24:59.958 }, 00:24:59.958 { 00:24:59.958 "subsystem": "iobuf", 00:24:59.958 "config": [ 00:24:59.958 { 00:24:59.958 "method": "iobuf_set_options", 00:24:59.958 "params": { 00:24:59.958 "small_pool_count": 8192, 00:24:59.958 "large_pool_count": 1024, 00:24:59.958 "small_bufsize": 8192, 00:24:59.958 "large_bufsize": 135168 00:24:59.958 } 00:24:59.958 } 00:24:59.958 ] 00:24:59.958 }, 00:24:59.958 { 00:24:59.958 "subsystem": "sock", 00:24:59.958 "config": [ 00:24:59.958 { 00:24:59.958 "method": "sock_set_default_impl", 00:24:59.958 "params": { 00:24:59.958 "impl_name": "posix" 00:24:59.958 } 00:24:59.958 }, 00:24:59.958 { 00:24:59.958 "method": "sock_impl_set_options", 00:24:59.958 "params": { 00:24:59.958 "impl_name": "ssl", 00:24:59.958 "recv_buf_size": 4096, 00:24:59.958 "send_buf_size": 4096, 00:24:59.958 "enable_recv_pipe": true, 00:24:59.959 "enable_quickack": false, 00:24:59.959 "enable_placement_id": 0, 00:24:59.959 "enable_zerocopy_send_server": true, 00:24:59.959 "enable_zerocopy_send_client": false, 00:24:59.959 "zerocopy_threshold": 0, 00:24:59.959 "tls_version": 0, 00:24:59.959 "enable_ktls": false 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "sock_impl_set_options", 00:24:59.959 "params": { 00:24:59.959 "impl_name": "posix", 00:24:59.959 "recv_buf_size": 2097152, 00:24:59.959 "send_buf_size": 2097152, 00:24:59.959 "enable_recv_pipe": true, 00:24:59.959 "enable_quickack": false, 00:24:59.959 "enable_placement_id": 0, 00:24:59.959 "enable_zerocopy_send_server": true, 00:24:59.959 "enable_zerocopy_send_client": false, 00:24:59.959 "zerocopy_threshold": 0, 00:24:59.959 "tls_version": 0, 00:24:59.959 "enable_ktls": false 00:24:59.959 } 00:24:59.959 } 00:24:59.959 ] 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "subsystem": "vmd", 00:24:59.959 "config": [] 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "subsystem": "accel", 00:24:59.959 "config": [ 00:24:59.959 { 00:24:59.959 "method": "accel_set_options", 00:24:59.959 "params": { 00:24:59.959 "small_cache_size": 128, 00:24:59.959 "large_cache_size": 16, 00:24:59.959 "task_count": 2048, 00:24:59.959 "sequence_count": 2048, 00:24:59.959 "buf_count": 2048 00:24:59.959 } 00:24:59.959 } 00:24:59.959 ] 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "subsystem": "bdev", 00:24:59.959 "config": [ 00:24:59.959 { 00:24:59.959 "method": "bdev_set_options", 00:24:59.959 "params": { 00:24:59.959 "bdev_io_pool_size": 65535, 00:24:59.959 "bdev_io_cache_size": 256, 00:24:59.959 "bdev_auto_examine": true, 00:24:59.959 "iobuf_small_cache_size": 128, 00:24:59.959 "iobuf_large_cache_size": 16 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_raid_set_options", 00:24:59.959 
"params": { 00:24:59.959 "process_window_size_kb": 1024, 00:24:59.959 "process_max_bandwidth_mb_sec": 0 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_iscsi_set_options", 00:24:59.959 "params": { 00:24:59.959 "timeout_sec": 30 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_nvme_set_options", 00:24:59.959 "params": { 00:24:59.959 "action_on_timeout": "none", 00:24:59.959 "timeout_us": 0, 00:24:59.959 "timeout_admin_us": 0, 00:24:59.959 "keep_alive_timeout_ms": 10000, 00:24:59.959 "arbitration_burst": 0, 00:24:59.959 "low_priority_weight": 0, 00:24:59.959 "medium_priority_weight": 0, 00:24:59.959 "high_priority_weight": 0, 00:24:59.959 "nvme_adminq_poll_period_us": 10000, 00:24:59.959 "nvme_ioq_poll_period_us": 0, 00:24:59.959 "io_queue_requests": 512, 00:24:59.959 "delay_cmd_submit": true, 00:24:59.959 "transport_retry_count": 4, 00:24:59.959 "bdev_retry_count": 3, 00:24:59.959 "transport_ack_timeout": 0, 00:24:59.959 "ctrlr_loss_timeout_sec": 0, 00:24:59.959 "reconnect_delay_sec": 0, 00:24:59.959 "fast_io_fail_timeout_sec": 0, 00:24:59.959 "disable_auto_failback": false, 00:24:59.959 "generate_uuids": false, 00:24:59.959 "transport_tos": 0, 00:24:59.959 "nvme_error_stat": false, 00:24:59.959 "rdma_srq_size": 0, 00:24:59.959 "io_path_stat": false, 00:24:59.959 "allow_accel_sequence": false, 00:24:59.959 "rdma_max_cq_size": 0, 00:24:59.959 "rdma_cm_event_timeout_ms": 0, 00:24:59.959 "dhchap_digests": [ 00:24:59.959 "sha256", 00:24:59.959 "sha384", 00:24:59.959 "sha512" 00:24:59.959 ], 00:24:59.959 "dhchap_dhgroups": [ 00:24:59.959 "null", 00:24:59.959 "ffdhe2048", 00:24:59.959 "ffdhe3072", 00:24:59.959 "ffdhe4096", 00:24:59.959 "ffdhe6144", 00:24:59.959 "ffdhe8192" 00:24:59.959 ] 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_nvme_attach_controller", 00:24:59.959 "params": { 00:24:59.959 "name": "nvme0", 00:24:59.959 "trtype": "TCP", 00:24:59.959 "adrfam": "IPv4", 00:24:59.959 "traddr": "10.0.0.2", 00:24:59.959 "trsvcid": "4420", 00:24:59.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.959 "prchk_reftag": false, 00:24:59.959 "prchk_guard": false, 00:24:59.959 "ctrlr_loss_timeout_sec": 0, 00:24:59.959 "reconnect_delay_sec": 0, 00:24:59.959 "fast_io_fail_timeout_sec": 0, 00:24:59.959 "psk": "key0", 00:24:59.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.959 "hdgst": false, 00:24:59.959 "ddgst": false 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_nvme_set_hotplug", 00:24:59.959 "params": { 00:24:59.959 "period_us": 100000, 00:24:59.959 "enable": false 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_enable_histogram", 00:24:59.959 "params": { 00:24:59.959 "name": "nvme0n1", 00:24:59.959 "enable": true 00:24:59.959 } 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "method": "bdev_wait_for_examine" 00:24:59.959 } 00:24:59.959 ] 00:24:59.959 }, 00:24:59.959 { 00:24:59.959 "subsystem": "nbd", 00:24:59.959 "config": [] 00:24:59.959 } 00:24:59.959 ] 00:24:59.959 }' 00:24:59.959 [2024-10-01 17:24:58.327521] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:24:59.959 [2024-10-01 17:24:58.327576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081162 ] 00:24:59.959 [2024-10-01 17:24:58.401212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.959 [2024-10-01 17:24:58.429688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.219 [2024-10-01 17:24:58.558833] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.788 17:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.049 Running I/O for 1 seconds... 00:25:01.991 4749.00 IOPS, 18.55 MiB/s 00:25:01.991 Latency(us) 00:25:01.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.991 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:01.991 Verification LBA range: start 0x0 length 0x2000 00:25:01.991 nvme0n1 : 1.02 4801.82 18.76 0.00 0.00 26478.33 4778.67 80827.73 00:25:01.991 =================================================================================================================== 00:25:01.991 Total : 4801.82 18.76 0.00 0.00 26478.33 4778.67 80827.73 00:25:01.991 { 00:25:01.991 "results": [ 00:25:01.991 { 00:25:01.991 "job": "nvme0n1", 00:25:01.991 "core_mask": "0x2", 00:25:01.991 "workload": "verify", 00:25:01.991 "status": "finished", 00:25:01.991 "verify_range": { 00:25:01.991 "start": 0, 00:25:01.991 "length": 8192 00:25:01.991 }, 00:25:01.991 "queue_depth": 128, 00:25:01.991 "io_size": 4096, 00:25:01.991 "runtime": 1.015656, 00:25:01.991 "iops": 4801.822664366676, 00:25:01.991 "mibps": 18.757119782682327, 00:25:01.991 "io_failed": 0, 00:25:01.991 "io_timeout": 0, 00:25:01.991 "avg_latency_us": 26478.326184129586, 00:25:01.991 "min_latency_us": 4778.666666666667, 00:25:01.991 "max_latency_us": 80827.73333333334 00:25:01.991 } 00:25:01.991 ], 00:25:01.991 "core_count": 1 00:25:01.991 } 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:01.991 nvmf_trace.0 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3081162 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3081162 ']' 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3081162 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.991 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3081162 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3081162' 00:25:02.252 killing process with pid 3081162 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3081162 00:25:02.252 Received shutdown signal, test time was about 1.000000 seconds 00:25:02.252 00:25:02.252 Latency(us) 00:25:02.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.252 =================================================================================================================== 00:25:02.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3081162 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:02.252 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.253 rmmod nvme_tcp 00:25:02.253 rmmod nvme_fabrics 00:25:02.253 rmmod nvme_keyring 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3080820 ']' 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3080820 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3080820 ']' 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3080820 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.253 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080820 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080820' 00:25:02.514 killing process with pid 3080820 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3080820 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3080820 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.514 17:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pDI60gb5u1 /tmp/tmp.0xcd3EZDTM /tmp/tmp.Ce8noBp1z6 00:25:05.058 00:25:05.058 real 1m20.567s 00:25:05.058 user 2m4.184s 00:25:05.058 sys 0m26.734s 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.058 ************************************ 00:25:05.058 END TEST nvmf_tls 00:25:05.058 ************************************ 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
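The interleaved '[' -z <pid> ']', kill -0, uname and ps --no-headers -o comm= fragments above are the xtrace expansion of the harness's killprocess helper from common/autotest_common.sh. Reassembled as a standalone sketch that mirrors only the calls visible in this trace (the real helper has additional branches, for example for processes started under sudo, that this run never exercises):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                            # no pid recorded for this test
      kill -0 "$pid" || return 0                           # process already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 or reactor_1 here
      fi
      if [ "$process_name" != sudo ]; then                 # the sudo case takes a different path (not hit here)
          echo "killing process with pid $pid"
          kill "$pid"
      fi
  }
  # callers then 'wait $pid' (autotest_common.sh@974 in the trace) so the reactor has exited
  # before rmmod and nvmftestfini run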
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.058 ************************************ 00:25:05.058 START TEST nvmf_fips 00:25:05.058 ************************************ 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:05.058 * Looking for test storage... 00:25:05.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:25:05.058 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.059 --rc genhtml_branch_coverage=1 00:25:05.059 --rc genhtml_function_coverage=1 00:25:05.059 --rc genhtml_legend=1 00:25:05.059 --rc geninfo_all_blocks=1 00:25:05.059 --rc geninfo_unexecuted_blocks=1 00:25:05.059 00:25:05.059 ' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.059 --rc genhtml_branch_coverage=1 00:25:05.059 --rc genhtml_function_coverage=1 00:25:05.059 --rc genhtml_legend=1 00:25:05.059 --rc geninfo_all_blocks=1 00:25:05.059 --rc geninfo_unexecuted_blocks=1 00:25:05.059 00:25:05.059 ' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.059 --rc genhtml_branch_coverage=1 00:25:05.059 --rc genhtml_function_coverage=1 00:25:05.059 --rc genhtml_legend=1 00:25:05.059 --rc geninfo_all_blocks=1 00:25:05.059 --rc geninfo_unexecuted_blocks=1 00:25:05.059 00:25:05.059 ' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.059 --rc genhtml_branch_coverage=1 00:25:05.059 --rc genhtml_function_coverage=1 00:25:05.059 --rc genhtml_legend=1 00:25:05.059 --rc geninfo_all_blocks=1 00:25:05.059 --rc geninfo_unexecuted_blocks=1 00:25:05.059 00:25:05.059 ' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:05.059 17:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:05.059 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:05.060 Error setting digest 00:25:05.060 40D2D2F7917F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:05.060 40D2D2F7917F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:05.060 
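The trace above gates the FIPS test on three facts: the installed OpenSSL is at least 3.0.0, a FIPS provider is listed alongside the base provider, and a non-approved digest is actually refused. A minimal stand-alone sketch of that verification follows; it assumes a Linux host with OpenSSL 3.x and is not part of the harness itself.

  # List loaded providers; under an enforcing FIPS build a "fips" provider
  # should appear next to the base provider.
  openssl version
  openssl list -providers | grep name
  # Negative check mirroring the harness: MD5 must fail when the FIPS
  # provider's algorithm policy is enforced.
  if echo -n test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 accepted - FIPS policy is NOT being enforced" >&2
  else
      echo "MD5 rejected - FIPS policy is active"
  fi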
17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.060 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.061 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:05.061 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:05.061 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.061 17:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.202 17:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:13.202 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:13.202 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:13.202 17:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:13.202 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:13.202 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:13.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.203 17:25:10 
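The gather_supported_nvmf_pci_devs walk above finds the two E810 ports (vendor 0x8086, device 0x159b, driver ice) and maps each one to its kernel net device through sysfs. The loop below is a hand-run sketch of that discovery; the vendor/device pair is the one seen in this log, and other NIC models would need their own IDs.

  # Enumerate PCI functions and print the netdevs behind matching Intel E810 ports.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
          ls "$pci/net" 2>/dev/null     # e.g. cvl_0_0 / cvl_0_1 in this run
      fi
  done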
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:25:13.203 00:25:13.203 --- 10.0.0.2 ping statistics --- 00:25:13.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.203 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:25:13.203 00:25:13.203 --- 10.0.0.1 ping statistics --- 00:25:13.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.203 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3085846 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3085846 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3085846 ']' 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.203 17:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.203 [2024-10-01 17:25:10.874618] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
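nvmf_tcp_init above splits the two ports into target and initiator roles: cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction confirms the path before the target starts. The same plumbing, condensed from the trace (interface names are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns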
00:25:13.203 [2024-10-01 17:25:10.874694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.203 [2024-10-01 17:25:10.962875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.203 [2024-10-01 17:25:11.009405] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.203 [2024-10-01 17:25:11.009462] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.203 [2024-10-01 17:25:11.009471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.203 [2024-10-01 17:25:11.009479] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.203 [2024-10-01 17:25:11.009485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.203 [2024-10-01 17:25:11.009509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.51Q 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.51Q 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.51Q 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.51Q 00:25:13.203 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.464 [2024-10-01 17:25:11.879474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.464 [2024-10-01 17:25:11.895469] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.464 [2024-10-01 17:25:11.895806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.464 malloc0 00:25:13.464 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.465 17:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3085930 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3085930 /var/tmp/bdevperf.sock 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3085930 ']' 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.465 17:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.725 [2024-10-01 17:25:12.043972] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:25:13.725 [2024-10-01 17:25:12.044046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085930 ] 00:25:13.725 [2024-10-01 17:25:12.097707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.725 [2024-10-01 17:25:12.130604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.725 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.725 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:13.725 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.51Q 00:25:13.986 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:13.986 [2024-10-01 17:25:12.509416] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:14.247 TLSTESTn1 00:25:14.247 17:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.247 Running I/O for 10 seconds... 
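fips.sh then exercises NVMe/TCP with TLS end to end: the sample PSK is written to a mode-0600 file, registered with the bdevperf RPC server as keyring entry key0, and handed to bdev_nvme_attach_controller so the connection to 10.0.0.2:4420 is made over TLS, after which bdevperf drives verify I/O for ten seconds. The commands below are lifted from the trace; paths are shown relative to the SPDK tree, and /tmp/spdk-psk.51Q is simply the mktemp result of this particular run.

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)            # e.g. /tmp/spdk-psk.51Q
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests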
00:25:24.541 4669.00 IOPS, 18.24 MiB/s 5151.00 IOPS, 20.12 MiB/s 5638.67 IOPS, 22.03 MiB/s 5719.75 IOPS, 22.34 MiB/s 5809.60 IOPS, 22.69 MiB/s 5853.33 IOPS, 22.86 MiB/s 5940.14 IOPS, 23.20 MiB/s 5929.25 IOPS, 23.16 MiB/s 6000.22 IOPS, 23.44 MiB/s 6048.40 IOPS, 23.63 MiB/s 00:25:24.541 Latency(us) 00:25:24.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:24.541 Verification LBA range: start 0x0 length 0x2000 00:25:24.541 TLSTESTn1 : 10.01 6053.22 23.65 0.00 0.00 21114.12 4614.83 38010.88 00:25:24.541 =================================================================================================================== 00:25:24.541 Total : 6053.22 23.65 0.00 0.00 21114.12 4614.83 38010.88 00:25:24.541 { 00:25:24.541 "results": [ 00:25:24.541 { 00:25:24.541 "job": "TLSTESTn1", 00:25:24.541 "core_mask": "0x4", 00:25:24.541 "workload": "verify", 00:25:24.541 "status": "finished", 00:25:24.541 "verify_range": { 00:25:24.541 "start": 0, 00:25:24.541 "length": 8192 00:25:24.541 }, 00:25:24.541 "queue_depth": 128, 00:25:24.541 "io_size": 4096, 00:25:24.541 "runtime": 10.013025, 00:25:24.541 "iops": 6053.215686568245, 00:25:24.541 "mibps": 23.645373775657205, 00:25:24.541 "io_failed": 0, 00:25:24.541 "io_timeout": 0, 00:25:24.541 "avg_latency_us": 21114.11827776036, 00:25:24.541 "min_latency_us": 4614.826666666667, 00:25:24.541 "max_latency_us": 38010.88 00:25:24.541 } 00:25:24.541 ], 00:25:24.541 "core_count": 1 00:25:24.541 } 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:24.541 nvmf_trace.0 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3085930 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3085930 ']' 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3085930 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085930 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085930' 00:25:24.541 killing process with pid 3085930 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3085930 00:25:24.541 Received shutdown signal, test time was about 10.000000 seconds 00:25:24.541 00:25:24.541 Latency(us) 00:25:24.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.541 =================================================================================================================== 00:25:24.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.541 17:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3085930 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.541 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.541 rmmod nvme_tcp 00:25:24.541 rmmod nvme_fabrics 00:25:24.541 rmmod nvme_keyring 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3085846 ']' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3085846 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3085846 ']' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3085846 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085846 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085846' 00:25:24.803 killing process with pid 3085846 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3085846 00:25:24.803 17:25:23 
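Teardown in the trace is deliberately defensive: before signalling anything, killprocess re-reads the command name for the recorded PID to confirm it is still the SPDK reactor that was started (not a reused PID or a sudo wrapper), and only then kills and waits on it. A hedged distillation of that pattern, using the bdevperf PID from this run:

  pid=3085930                                   # recorded at start-up
  name=$(ps --no-headers -o comm= "$pid")       # empty if the PID is gone
  if [[ -n $name && $name != sudo ]]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap it if it is our child
  fi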
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3085846 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.803 17:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.51Q 00:25:27.351 00:25:27.351 real 0m22.266s 00:25:27.351 user 0m23.506s 00:25:27.351 sys 0m9.156s 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.351 ************************************ 00:25:27.351 END TEST nvmf_fips 00:25:27.351 ************************************ 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:27.351 ************************************ 00:25:27.351 START TEST nvmf_control_msg_list 00:25:27.351 ************************************ 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:27.351 * Looking for test storage... 
00:25:27.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.351 --rc genhtml_branch_coverage=1 00:25:27.351 --rc genhtml_function_coverage=1 00:25:27.351 --rc genhtml_legend=1 00:25:27.351 --rc geninfo_all_blocks=1 00:25:27.351 --rc geninfo_unexecuted_blocks=1 00:25:27.351 00:25:27.351 ' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.351 --rc genhtml_branch_coverage=1 00:25:27.351 --rc genhtml_function_coverage=1 00:25:27.351 --rc genhtml_legend=1 00:25:27.351 --rc geninfo_all_blocks=1 00:25:27.351 --rc geninfo_unexecuted_blocks=1 00:25:27.351 00:25:27.351 ' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.351 --rc genhtml_branch_coverage=1 00:25:27.351 --rc genhtml_function_coverage=1 00:25:27.351 --rc genhtml_legend=1 00:25:27.351 --rc geninfo_all_blocks=1 00:25:27.351 --rc geninfo_unexecuted_blocks=1 00:25:27.351 00:25:27.351 ' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:27.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.351 --rc genhtml_branch_coverage=1 00:25:27.351 --rc genhtml_function_coverage=1 00:25:27.351 --rc genhtml_legend=1 00:25:27.351 --rc geninfo_all_blocks=1 00:25:27.351 --rc geninfo_unexecuted_blocks=1 00:25:27.351 00:25:27.351 ' 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.351 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.352 17:25:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:35.495 17:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.495 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:35.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.496 17:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:35.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:35.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
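The loop traced above resolves each candidate NIC's PCI address to its kernel interface name by listing /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup, using the 0000:4b:00.0 and 0000:4b:00.1 addresses reported above (other hosts will report different addresses and interface names):

  # Map PCI addresses of candidate NICs to their net interface names.
  pci_devs=(0000:4b:00.0 0000:4b:00.1)               # taken from the 'Found ...' lines above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done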
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:35.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.496 17:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.496 17:25:33 
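nvmf_tcp_init, traced above, isolates the target side in its own network namespace: the first detected interface (cvl_0_0) is moved into cvl_0_0_ns_spdk and given the target address 10.0.0.2/24, while the second interface (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24. Condensed into a standalone sketch (interface names are the ones detected above; the TARGET_NS variable name is just for the sketch):

  # Put the target NIC in its own namespace and address both ends.
  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                          # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up

From this point the trace runs every target-side command, including nvmf_tgt itself, through the NVMF_TARGET_NS_CMD prefix (ip netns exec cvl_0_0_ns_spdk), then opens port 4420 in iptables and pings in both directions to verify the link.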
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:35.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:25:35.496 00:25:35.496 --- 10.0.0.2 ping statistics --- 00:25:35.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.496 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:25:35.496 00:25:35.496 --- 10.0.0.1 ping statistics --- 00:25:35.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.496 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3092247 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3092247 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3092247 ']' 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.496 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.496 [2024-10-01 17:25:33.145713] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:25:35.497 [2024-10-01 17:25:33.145781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.497 [2024-10-01 17:25:33.217587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.497 [2024-10-01 17:25:33.255481] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.497 [2024-10-01 17:25:33.255529] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.497 [2024-10-01 17:25:33.255542] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.497 [2024-10-01 17:25:33.255548] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.497 [2024-10-01 17:25:33.255554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
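nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application is up and serving its RPC socket at /var/tmp/spdk.sock. A rough stand-in for that wait (the real helper in autotest_common.sh is more involved; polling for the socket path is only an approximation):

  # Start the target in the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  rpc_sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the trace
      kill -0 "$nvmfpid" 2>/dev/null || exit 1    # give up if the target died
      [ -S "$rpc_sock" ] && break                 # socket exists: target is listening
      sleep 0.5
  done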
00:25:35.497 [2024-10-01 17:25:33.255573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.497 [2024-10-01 17:25:33.983377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:35.497 17:25:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.497 Malloc0 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.497 17:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.497 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:35.758 [2024-10-01 17:25:34.044531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3092593 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3092594 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3092595 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3092593 00:25:35.758 17:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:35.758 [2024-10-01 17:25:34.114913] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:35.758 [2024-10-01 17:25:34.125044] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:35.758 [2024-10-01 17:25:34.125332] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:36.699 Initializing NVMe Controllers 00:25:36.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:36.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:36.699 Initialization complete. Launching workers. 
00:25:36.699 ======================================================== 00:25:36.699 Latency(us) 00:25:36.699 Device Information : IOPS MiB/s Average min max 00:25:36.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1622.00 6.34 616.32 227.84 830.14 00:25:36.699 ======================================================== 00:25:36.699 Total : 1622.00 6.34 616.32 227.84 830.14 00:25:36.699 00:25:36.699 [2024-10-01 17:25:35.219059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24574a0 is same with the state(6) to be set 00:25:36.699 Initializing NVMe Controllers 00:25:36.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:36.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:36.699 Initialization complete. Launching workers. 00:25:36.699 ======================================================== 00:25:36.699 Latency(us) 00:25:36.699 Device Information : IOPS MiB/s Average min max 00:25:36.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1584.00 6.19 631.36 164.97 808.88 00:25:36.699 ======================================================== 00:25:36.699 Total : 1584.00 6.19 631.36 164.97 808.88 00:25:36.699 00:25:36.960 Initializing NVMe Controllers 00:25:36.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:36.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:36.960 Initialization complete. Launching workers. 00:25:36.960 ======================================================== 00:25:36.960 Latency(us) 00:25:36.960 Device Information : IOPS MiB/s Average min max 00:25:36.960 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40918.28 40781.39 41320.65 00:25:36.960 ======================================================== 00:25:36.960 Total : 25.00 0.10 40918.28 40781.39 41320.65 00:25:36.960 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3092594 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3092595 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.960 rmmod nvme_tcp 00:25:36.960 rmmod nvme_fabrics 00:25:36.960 rmmod nvme_keyring 00:25:36.960 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:37.259 17:25:35 
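The three latency tables above come from three spdk_nvme_perf clients (core masks 0x2, 0x4 and 0x8, queue depth 1, 4 KiB random reads, 1 second) hitting the same subsystem concurrently; the transport was created with 768-byte in-capsule data and --control-msg-num 1, and the lcore 1 instance ends up averaging roughly 41 ms against roughly 0.6 ms for the other two, which appears to be exactly the control-message contention this test exercises. rpc_cmd in the harness issues these calls over the target's RPC socket; a roughly equivalent standalone sequence using scripts/rpc.py, with paths shortened and flags copied from the trace, would be:

  rpc=./scripts/rpc.py    # the harness's rpc_cmd wraps this against /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a       # -a: allow any host
  $rpc bdev_malloc_create -b Malloc0 32 512                      # 32 MB bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three concurrent single-queue perf jobs on separate cores, as launched above.
  for mask in 0x2 0x4 0x8; do
      ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait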
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 3092247 ']' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3092247 ']' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3092247' 00:25:37.259 killing process with pid 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3092247 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.259 17:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.271 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.271 00:25:39.271 real 0m12.348s 00:25:39.271 user 0m8.076s 00:25:39.271 sys 0m6.420s 00:25:39.271 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.271 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
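The teardown traced above can strip the firewall rules the test added without tracking rule numbers, because every rule was inserted through the ipts wrapper with an '-m comment --comment SPDK_NVMF:...' tag; cleanup is then just a filter over iptables-save. The pattern, condensed from the ipts and iptr calls in the trace:

  # Insert a rule tagged so it can be found again later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Cleanup: drop every tagged rule in one pass.
  iptables-save | grep -v SPDK_NVMF | iptables-restore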
common/autotest_common.sh@10 -- # set +x 00:25:39.271 ************************************ 00:25:39.271 END TEST nvmf_control_msg_list 00:25:39.271 ************************************ 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:39.532 ************************************ 00:25:39.532 START TEST nvmf_wait_for_buf 00:25:39.532 ************************************ 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:39.532 * Looking for test storage... 00:25:39.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:39.532 17:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.532 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:39.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.794 --rc genhtml_branch_coverage=1 00:25:39.794 --rc genhtml_function_coverage=1 00:25:39.794 --rc genhtml_legend=1 00:25:39.794 --rc geninfo_all_blocks=1 00:25:39.794 --rc geninfo_unexecuted_blocks=1 00:25:39.794 00:25:39.794 ' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:39.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.794 --rc genhtml_branch_coverage=1 00:25:39.794 --rc genhtml_function_coverage=1 00:25:39.794 --rc genhtml_legend=1 00:25:39.794 --rc geninfo_all_blocks=1 00:25:39.794 --rc geninfo_unexecuted_blocks=1 00:25:39.794 00:25:39.794 ' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:39.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.794 --rc genhtml_branch_coverage=1 00:25:39.794 --rc genhtml_function_coverage=1 00:25:39.794 --rc genhtml_legend=1 00:25:39.794 --rc geninfo_all_blocks=1 00:25:39.794 --rc geninfo_unexecuted_blocks=1 00:25:39.794 00:25:39.794 ' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:39.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.794 --rc genhtml_branch_coverage=1 00:25:39.794 --rc genhtml_function_coverage=1 00:25:39.794 --rc genhtml_legend=1 00:25:39.794 --rc geninfo_all_blocks=1 00:25:39.794 --rc geninfo_unexecuted_blocks=1 00:25:39.794 00:25:39.794 ' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.794 17:25:38 
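The scripts/common.sh trace above is a pure-bash version comparison: 'lt 1.15 2' splits both strings on '.', '-' and ':' and compares the components numerically, which is how the harness decides that this lcov (a 1.x release) still wants the legacy --rc option spelling. A compact sketch of the same idea; version_lt is a stand-in name, not the harness's actual helper:

  # Return success if version $1 is strictly older than version $2.
  version_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          ((x > y)) && return 1
          ((x < y)) && return 0
      done
      return 1    # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "old lcov: use legacy --rc flags"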
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.794 17:25:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.929 
17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:47.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:47.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:47.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:47.929 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.929 17:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.929 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:25:47.930 00:25:47.930 --- 10.0.0.2 ping statistics --- 00:25:47.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.930 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:25:47.930 00:25:47.930 --- 10.0.0.1 ping statistics --- 00:25:47.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.930 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3096938 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 3096938 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3096938 ']' 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.930 17:25:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 [2024-10-01 17:25:45.499524] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
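For readers following the trace, the nvmf_tcp_init sequence recorded above reduces to a handful of iproute2/iptables steps. This is a condensed sketch rather than the harness code itself; the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are the ones this log reports:

    # isolate the target-side port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address the initiator side (default namespace) and the target side (namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links up on both sides
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # accept NVMe/TCP traffic (port 4420) on the initiator-side interface
    # (the harness also tags this rule with an SPDK_NVMF comment so teardown can
    #  strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Keeping the two ports in separate namespaces means the 10.0.0.1 <-> 10.0.0.2 traffic has to traverse the physical link instead of being short-circuited as local traffic. The harness then loads nvme-tcp and launches nvmf_tgt inside the namespace with --wait-for-rpc; its startup banner is the SPDK/DPDK initialization notice just above, continued below.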
00:25:47.930 [2024-10-01 17:25:45.499612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.930 [2024-10-01 17:25:45.573290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.930 [2024-10-01 17:25:45.610220] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.930 [2024-10-01 17:25:45.610266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.930 [2024-10-01 17:25:45.610275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.930 [2024-10-01 17:25:45.610281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.930 [2024-10-01 17:25:45.610287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.930 [2024-10-01 17:25:45.610314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 Malloc0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 [2024-10-01 17:25:46.418286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:47.930 [2024-10-01 17:25:46.454529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.930 17:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.191 [2024-10-01 17:25:46.535075] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:49.571 Initializing NVMe Controllers 00:25:49.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:49.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:49.571 Initialization complete. Launching workers. 00:25:49.571 ======================================================== 00:25:49.571 Latency(us) 00:25:49.571 Device Information : IOPS MiB/s Average min max 00:25:49.571 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32264.34 7989.62 63852.79 00:25:49.571 ======================================================== 00:25:49.571 Total : 129.00 16.12 32264.34 7989.62 63852.79 00:25:49.571 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:49.571 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:49.572 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:49.572 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.572 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:49.572 17:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.572 rmmod nvme_tcp 00:25:49.572 rmmod nvme_fabrics 00:25:49.572 rmmod nvme_keyring 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3096938 ']' 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3096938 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3096938 ']' 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3096938 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.572 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3096938 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3096938' 00:25:49.832 killing process with pid 3096938 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3096938 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3096938 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.832 17:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.374 00:25:52.374 real 0m12.447s 00:25:52.374 user 0m5.040s 00:25:52.374 sys 0m5.964s 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:52.374 ************************************ 00:25:52.374 END TEST nvmf_wait_for_buf 00:25:52.374 ************************************ 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.374 17:25:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:52.374 ************************************ 00:25:52.374 START TEST nvmf_fuzz 00:25:52.374 ************************************ 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:52.374 * Looking for test storage... 00:25:52.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:52.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.374 --rc genhtml_branch_coverage=1 00:25:52.374 --rc genhtml_function_coverage=1 00:25:52.374 --rc genhtml_legend=1 00:25:52.374 --rc geninfo_all_blocks=1 00:25:52.374 --rc geninfo_unexecuted_blocks=1 00:25:52.374 00:25:52.374 ' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:52.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.374 --rc genhtml_branch_coverage=1 00:25:52.374 --rc genhtml_function_coverage=1 00:25:52.374 --rc genhtml_legend=1 00:25:52.374 --rc geninfo_all_blocks=1 00:25:52.374 --rc geninfo_unexecuted_blocks=1 00:25:52.374 00:25:52.374 ' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:52.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.374 --rc genhtml_branch_coverage=1 00:25:52.374 --rc genhtml_function_coverage=1 00:25:52.374 --rc genhtml_legend=1 00:25:52.374 --rc geninfo_all_blocks=1 00:25:52.374 --rc geninfo_unexecuted_blocks=1 00:25:52.374 00:25:52.374 ' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:52.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.374 --rc genhtml_branch_coverage=1 00:25:52.374 --rc genhtml_function_coverage=1 00:25:52.374 --rc genhtml_legend=1 00:25:52.374 --rc geninfo_all_blocks=1 00:25:52.374 --rc geninfo_unexecuted_blocks=1 00:25:52.374 00:25:52.374 ' 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.374 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.375 17:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:00.517 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:00.517 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:00.517 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:00.517 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:00.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:26:00.517 00:26:00.517 --- 10.0.0.2 ping statistics --- 00:26:00.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.517 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:26:00.517 00:26:00.517 --- 10.0.0.1 ping statistics --- 00:26:00.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.517 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.517 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3101623 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3101623 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3101623 ']' 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
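As in the wait_for_buf run above, once this RPC socket appears the target is provisioned over JSON-RPC before the fuzzer is pointed at it; the trace that follows records the same rpc_cmd calls. The sketch below assumes the harness's rpc_cmd helper is equivalent to invoking SPDK's scripts/rpc.py against /var/tmp/spdk.sock, and abbreviates paths relative to the SPDK checkout; the RPC method names and arguments are the ones that appear verbatim in this log:

    # launch the target inside the target namespace
    # (-i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0x1: single-core mask)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # helper: send JSON-RPC to the target's UNIX socket
    # (assumption: the harness's rpc_cmd behaves like scripts/rpc.py here)
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport; flags as recorded in the trace
    rpc bdev_malloc_create -b Malloc0 64 512                                     # 64 MiB malloc bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host, -s: serial number
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The fuzzer (and, in the earlier test, spdk_nvme_perf) then addresses that listener with a transport ID string of the form 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420', as the following trace shows.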
00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.518 17:25:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 Malloc0 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:00.518 17:25:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:32.631 Fuzzing completed. 
Shutting down the fuzz application 00:26:32.631 00:26:32.631 Dumping successful admin opcodes: 00:26:32.631 8, 9, 10, 24, 00:26:32.631 Dumping successful io opcodes: 00:26:32.631 0, 9, 00:26:32.631 NS: 0x200003aeff00 I/O qp, Total commands completed: 901620, total successful commands: 5250, random_seed: 2911934272 00:26:32.631 NS: 0x200003aeff00 admin qp, Total commands completed: 113932, total successful commands: 931, random_seed: 2952632064 00:26:32.631 17:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:32.631 Fuzzing completed. Shutting down the fuzz application 00:26:32.631 00:26:32.631 Dumping successful admin opcodes: 00:26:32.631 24, 00:26:32.631 Dumping successful io opcodes: 00:26:32.631 00:26:32.631 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2777220797 00:26:32.631 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2777296081 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:32.631 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.632 rmmod nvme_tcp 00:26:32.632 rmmod nvme_fabrics 00:26:32.632 rmmod nvme_keyring 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 3101623 ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3101623 ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:32.632 17:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3101623' 00:26:32.632 killing process with pid 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3101623 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.632 17:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:34.543 00:26:34.543 real 0m42.506s 00:26:34.543 user 0m56.158s 00:26:34.543 sys 0m15.695s 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 ************************************ 00:26:34.543 END TEST nvmf_fuzz 00:26:34.543 ************************************ 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:34.543 17:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 
************************************ 00:26:34.543 START TEST nvmf_multiconnection 00:26:34.543 ************************************ 00:26:34.543 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:34.805 * Looking for test storage... 00:26:34.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.805 --rc genhtml_branch_coverage=1 00:26:34.805 --rc genhtml_function_coverage=1 00:26:34.805 --rc genhtml_legend=1 00:26:34.805 --rc geninfo_all_blocks=1 00:26:34.805 --rc geninfo_unexecuted_blocks=1 00:26:34.805 00:26:34.805 ' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.805 --rc genhtml_branch_coverage=1 00:26:34.805 --rc genhtml_function_coverage=1 00:26:34.805 --rc genhtml_legend=1 00:26:34.805 --rc geninfo_all_blocks=1 00:26:34.805 --rc geninfo_unexecuted_blocks=1 00:26:34.805 00:26:34.805 ' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.805 --rc genhtml_branch_coverage=1 00:26:34.805 --rc genhtml_function_coverage=1 00:26:34.805 --rc genhtml_legend=1 00:26:34.805 --rc geninfo_all_blocks=1 00:26:34.805 --rc geninfo_unexecuted_blocks=1 00:26:34.805 00:26:34.805 ' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.805 --rc genhtml_branch_coverage=1 00:26:34.805 --rc genhtml_function_coverage=1 00:26:34.805 --rc genhtml_legend=1 00:26:34.805 --rc geninfo_all_blocks=1 00:26:34.805 --rc geninfo_unexecuted_blocks=1 00:26:34.805 00:26:34.805 ' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.805 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.806 17:26:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.947 17:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:42.947 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:42.947 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.947 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:42.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:42.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:26:42.948 00:26:42.948 --- 10.0.0.2 ping statistics --- 00:26:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.948 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:26:42.948 00:26:42.948 --- 10.0.0.1 ping statistics --- 00:26:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.948 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=3111961 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 3111961 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:42.948 17:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3111961 ']' 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:42.948 17:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 [2024-10-01 17:26:40.609980] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:26:42.948 [2024-10-01 17:26:40.610043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.948 [2024-10-01 17:26:40.677336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.948 [2024-10-01 17:26:40.711081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.948 [2024-10-01 17:26:40.711120] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.948 [2024-10-01 17:26:40.711129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.948 [2024-10-01 17:26:40.711138] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.948 [2024-10-01 17:26:40.711144] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
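For reference, the two tracepoint notices above already spell out how to inspect the target's trace data; a minimal sketch of doing so from the SPDK checkout (assuming the same shm id -i 0 and the default build/bin output path, with the snapshot redirected to a file for convenience) would be:

  # Dump a snapshot of the nvmf tracepoints from the running target (shm id 0),
  # as the app_setup_trace notice above suggests.
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace file for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/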
00:26:42.948 [2024-10-01 17:26:40.711289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.948 [2024-10-01 17:26:40.711403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.948 [2024-10-01 17:26:40.711562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.948 [2024-10-01 17:26:40.711563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 [2024-10-01 17:26:41.446742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.948 Malloc1 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:42.948 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.949 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 [2024-10-01 17:26:41.514013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 Malloc2 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 Malloc3 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 Malloc4 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.211 Malloc5 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.211 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.212 Malloc6 00:26:43.212 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 Malloc7 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
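Once the remaining subsystems (cnode8 through cnode11) are configured in the entries below, the host side connects to each one in turn and polls until a block device with the matching serial appears. In plain nvme-cli / lsblk terms that step, as it shows up further down in this log, amounts to:

  # Connect to one subsystem over TCP using the host NQN/ID generated earlier in this run,
  # then check that a namespace with the expected serial (SPDK1) is visible.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
               --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDK1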
00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 Malloc8 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 Malloc9 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:43.474 17:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 Malloc10 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.735 Malloc11 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.735 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.736 17:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:45.119 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:45.119 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:45.119 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.119 17:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:45.119 17:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.660 17:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:49.043 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:49.043 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:49.043 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.043 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:49.043 17:26:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.001 17:26:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:52.383 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:52.383 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:52.383 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:52.383 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:52.383 17:26:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.923 17:26:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:56.305 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:56.305 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:56.305 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:56.305 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:56.305 17:26:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.215 17:26:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:00.125 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:00.125 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:27:00.125 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.125 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:00.125 17:26:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.035 17:27:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:03.419 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:03.419 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:03.419 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:03.419 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:03.419 17:27:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.962 17:27:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:07.346 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:07.347 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:07.347 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:07.347 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:07.347 17:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.401 17:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:11.310 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:11.310 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:11.310 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:11.310 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:11.310 17:27:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:13.223 17:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:15.136 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:15.137 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:15.137 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:15.137 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:15.137 17:27:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.050 17:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:18.963 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:18.963 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:18.963 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.963 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:18.963 17:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:20.878 17:27:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.878 17:27:19 
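[editor's sketch] The trace above repeats one pattern per subsystem: an nvme connect to the TCP listener at 10.0.0.2:4420, then a serial-number poll (lsblk piped into grep -c SPDKn) until the new namespace shows up as a block device. The sketch below is a reconstruction of that pattern from the xtrace output only, not the actual autotest_common.sh or multiconnection.sh source; the host NQN, target address, port, and retry limit are copied from the log, while the per-retry delay is an assumption, since every poll in this run succeeded on its first check.

#!/usr/bin/env bash
# Reconstruction of the connect-and-wait pattern visible in the xtrace above.
# Not the exact SPDK test helpers; values copied from the log, retry delay assumed.

HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
HOSTID="00d0226a-fbea-ec11-9bc7-a4bf019282be"
TARGET_ADDR=10.0.0.2
TARGET_PORT=4420
NVMF_SUBSYS=11

waitforserial() {
    local serial=$1
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2                    # settle time before the first poll, as in the trace
    while ((i++ <= 15)); do
        # Count block devices whose SERIAL column matches the expected SPDK serial.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2                # retry delay (assumed; this run never needed a retry)
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a "$TARGET_ADDR" -s "$TARGET_PORT"
    waitforserial "SPDK${i}"
done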
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:22.792 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:22.792 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:22.792 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.792 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:22.792 17:27:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:24.701 17:27:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:24.701 [global] 00:27:24.701 thread=1 00:27:24.701 invalidate=1 00:27:24.701 rw=read 00:27:24.701 time_based=1 00:27:24.701 runtime=10 00:27:24.701 ioengine=libaio 00:27:24.701 direct=1 00:27:24.701 bs=262144 00:27:24.701 iodepth=64 00:27:24.701 norandommap=1 00:27:24.701 numjobs=1 00:27:24.701 00:27:24.701 [job0] 00:27:24.701 filename=/dev/nvme0n1 00:27:24.701 [job1] 00:27:24.701 filename=/dev/nvme10n1 00:27:24.701 [job2] 00:27:24.701 filename=/dev/nvme1n1 00:27:24.701 [job3] 00:27:24.701 filename=/dev/nvme2n1 00:27:24.701 [job4] 00:27:24.701 filename=/dev/nvme3n1 00:27:24.701 [job5] 00:27:24.701 filename=/dev/nvme4n1 00:27:24.701 [job6] 00:27:24.701 filename=/dev/nvme5n1 00:27:24.701 [job7] 00:27:24.701 filename=/dev/nvme6n1 00:27:24.701 [job8] 00:27:24.701 filename=/dev/nvme7n1 00:27:24.701 [job9] 00:27:24.701 filename=/dev/nvme8n1 00:27:24.701 [job10] 00:27:24.701 filename=/dev/nvme9n1 00:27:24.963 Could not set queue depth (nvme0n1) 00:27:24.963 Could not set queue depth (nvme10n1) 00:27:24.963 Could not set queue depth (nvme1n1) 00:27:24.963 Could not set queue depth (nvme2n1) 00:27:24.963 Could not set queue depth (nvme3n1) 00:27:24.963 Could not set queue depth (nvme4n1) 00:27:24.963 Could not set queue depth (nvme5n1) 00:27:24.963 Could not set queue depth (nvme6n1) 00:27:24.963 Could not set queue depth (nvme7n1) 00:27:24.963 Could not set queue depth (nvme8n1) 00:27:24.963 Could not set queue depth (nvme9n1) 00:27:25.223 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:25.223 fio-3.35 00:27:25.223 Starting 11 threads 00:27:37.504 00:27:37.504 job0: (groupid=0, jobs=1): err= 0: pid=3121096: Tue Oct 1 17:27:34 2024 00:27:37.504 read: IOPS=159, BW=39.9MiB/s (41.9MB/s)(400MiB/10022msec) 00:27:37.504 slat (usec): min=12, max=445408, avg=5726.05, stdev=29260.38 00:27:37.504 clat (msec): min=18, max=1125, avg=394.15, stdev=290.23 00:27:37.504 lat (msec): min=22, max=1155, avg=399.87, stdev=294.28 00:27:37.504 clat percentiles (msec): 00:27:37.504 | 1.00th=[ 30], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 81], 00:27:37.504 | 30.00th=[ 108], 40.00th=[ 230], 50.00th=[ 401], 60.00th=[ 472], 00:27:37.504 | 70.00th=[ 600], 80.00th=[ 684], 90.00th=[ 768], 95.00th=[ 860], 00:27:37.504 | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1133], 99.95th=[ 1133], 00:27:37.504 | 99.99th=[ 1133] 00:27:37.504 bw ( KiB/s): min= 7168, max=222720, per=4.56%, avg=39372.80, stdev=46667.57, samples=20 00:27:37.504 iops : min= 28, max= 870, avg=153.80, stdev=182.30, samples=20 00:27:37.504 lat (msec) : 20=0.06%, 50=6.06%, 100=22.42%, 250=11.87%, 500=22.42% 00:27:37.504 lat (msec) : 750=24.98%, 1000=10.43%, 2000=1.75% 00:27:37.504 cpu : usr=0.07%, sys=0.59%, ctx=237, majf=0, minf=4097 00:27:37.504 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:27:37.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.504 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.504 issued rwts: total=1601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.504 job1: (groupid=0, jobs=1): err= 0: pid=3121122: Tue Oct 1 17:27:34 2024 00:27:37.504 read: IOPS=600, BW=150MiB/s (157MB/s)(1508MiB/10053msec) 00:27:37.504 slat (usec): min=11, max=636025, avg=1183.42, stdev=9072.40 00:27:37.504 clat (msec): min=3, max=887, avg=105.26, stdev=116.74 00:27:37.504 lat (msec): min=3, max=887, avg=106.44, stdev=117.07 00:27:37.504 clat percentiles (msec): 00:27:37.504 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 42], 20.00th=[ 55], 00:27:37.504 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 89], 00:27:37.504 | 70.00th=[ 109], 80.00th=[ 124], 90.00th=[ 176], 95.00th=[ 224], 00:27:37.504 | 99.00th=[ 810], 99.50th=[ 877], 99.90th=[ 885], 99.95th=[ 885], 00:27:37.504 | 99.99th=[ 885] 00:27:37.504 bw ( KiB/s): min=12288, 
max=259584, per=17.70%, avg=152832.00, stdev=67422.02, samples=20 00:27:37.504 iops : min= 48, max= 1014, avg=597.00, stdev=263.37, samples=20 00:27:37.504 lat (msec) : 4=0.03%, 10=2.01%, 20=2.27%, 50=11.74%, 100=49.79% 00:27:37.504 lat (msec) : 250=30.13%, 500=1.62%, 750=1.18%, 1000=1.23% 00:27:37.504 cpu : usr=0.23%, sys=1.87%, ctx=1282, majf=0, minf=4098 00:27:37.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:37.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.504 issued rwts: total=6033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job2: (groupid=0, jobs=1): err= 0: pid=3121152: Tue Oct 1 17:27:34 2024 00:27:37.505 read: IOPS=282, BW=70.5MiB/s (73.9MB/s)(717MiB/10160msec) 00:27:37.505 slat (usec): min=8, max=593077, avg=2573.42, stdev=19971.45 00:27:37.505 clat (msec): min=6, max=1104, avg=223.92, stdev=245.41 00:27:37.505 lat (msec): min=7, max=1241, avg=226.49, stdev=248.20 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 55], 00:27:37.505 | 30.00th=[ 87], 40.00th=[ 100], 50.00th=[ 118], 60.00th=[ 142], 00:27:37.505 | 70.00th=[ 207], 80.00th=[ 376], 90.00th=[ 634], 95.00th=[ 735], 00:27:37.505 | 99.00th=[ 1028], 99.50th=[ 1045], 99.90th=[ 1053], 99.95th=[ 1053], 00:27:37.505 | 99.99th=[ 1099] 00:27:37.505 bw ( KiB/s): min= 3072, max=201216, per=8.31%, avg=71756.80, stdev=57791.28, samples=20 00:27:37.505 iops : min= 12, max= 786, avg=280.30, stdev=225.75, samples=20 00:27:37.505 lat (msec) : 10=1.67%, 20=3.00%, 50=13.47%, 100=22.96%, 250=32.80% 00:27:37.505 lat (msec) : 500=9.28%, 750=12.00%, 1000=2.65%, 2000=2.16% 00:27:37.505 cpu : usr=0.17%, sys=0.94%, ctx=577, majf=0, minf=4097 00:27:37.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:37.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.505 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job3: (groupid=0, jobs=1): err= 0: pid=3121166: Tue Oct 1 17:27:34 2024 00:27:37.505 read: IOPS=171, BW=42.8MiB/s (44.8MB/s)(429MiB/10020msec) 00:27:37.505 slat (usec): min=11, max=535563, avg=4625.42, stdev=22143.79 00:27:37.505 clat (msec): min=14, max=1155, avg=368.90, stdev=265.48 00:27:37.505 lat (msec): min=16, max=1212, avg=373.53, stdev=268.26 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 75], 00:27:37.505 | 30.00th=[ 109], 40.00th=[ 268], 50.00th=[ 347], 60.00th=[ 477], 00:27:37.505 | 70.00th=[ 558], 80.00th=[ 609], 90.00th=[ 693], 95.00th=[ 743], 00:27:37.505 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1150], 99.95th=[ 1150], 00:27:37.505 | 99.99th=[ 1150] 00:27:37.505 bw ( KiB/s): min= 9728, max=207872, per=4.90%, avg=42271.30, stdev=43604.74, samples=20 00:27:37.505 iops : min= 38, max= 812, avg=165.10, stdev=170.32, samples=20 00:27:37.505 lat (msec) : 20=0.29%, 50=4.96%, 100=21.70%, 250=12.31%, 500=25.15% 00:27:37.505 lat (msec) : 750=30.75%, 1000=2.45%, 2000=2.39% 00:27:37.505 cpu : usr=0.05%, sys=0.63%, ctx=294, majf=0, minf=4097 00:27:37.505 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:27:37.505 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.505 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.505 issued rwts: total=1714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job4: (groupid=0, jobs=1): err= 0: pid=3121175: Tue Oct 1 17:27:34 2024 00:27:37.505 read: IOPS=164, BW=41.1MiB/s (43.1MB/s)(414MiB/10073msec) 00:27:37.505 slat (usec): min=12, max=412736, avg=3605.97, stdev=22371.88 00:27:37.505 clat (usec): min=1900, max=1049.7k, avg=385356.46, stdev=292324.34 00:27:37.505 lat (usec): min=1947, max=1049.8k, avg=388962.43, stdev=294758.34 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 10], 20.00th=[ 26], 00:27:37.505 | 30.00th=[ 132], 40.00th=[ 296], 50.00th=[ 443], 60.00th=[ 518], 00:27:37.505 | 70.00th=[ 584], 80.00th=[ 651], 90.00th=[ 735], 95.00th=[ 844], 00:27:37.505 | 99.00th=[ 1028], 99.50th=[ 1036], 99.90th=[ 1036], 99.95th=[ 1053], 00:27:37.505 | 99.99th=[ 1053] 00:27:37.505 bw ( KiB/s): min= 9728, max=131072, per=4.72%, avg=40783.25, stdev=29817.60, samples=20 00:27:37.505 iops : min= 38, max= 512, avg=159.30, stdev=116.48, samples=20 00:27:37.505 lat (msec) : 2=0.18%, 4=8.21%, 10=1.69%, 20=5.19%, 50=13.10% 00:27:37.505 lat (msec) : 100=0.85%, 250=8.70%, 500=18.96%, 750=34.18%, 1000=7.91% 00:27:37.505 lat (msec) : 2000=1.03% 00:27:37.505 cpu : usr=0.09%, sys=0.75%, ctx=525, majf=0, minf=4097 00:27:37.505 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:27:37.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.505 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.505 issued rwts: total=1656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job5: (groupid=0, jobs=1): err= 0: pid=3121199: Tue Oct 1 17:27:34 2024 00:27:37.505 read: IOPS=255, BW=63.8MiB/s (66.8MB/s)(646MiB/10137msec) 00:27:37.505 slat (usec): min=11, max=241780, avg=2829.50, stdev=14887.66 00:27:37.505 clat (msec): min=3, max=997, avg=247.90, stdev=227.13 00:27:37.505 lat (msec): min=3, max=997, avg=250.73, stdev=229.89 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 34], 20.00th=[ 50], 00:27:37.505 | 30.00th=[ 77], 40.00th=[ 110], 50.00th=[ 169], 60.00th=[ 220], 00:27:37.505 | 70.00th=[ 338], 80.00th=[ 456], 90.00th=[ 600], 95.00th=[ 718], 00:27:37.505 | 99.00th=[ 877], 99.50th=[ 885], 99.90th=[ 995], 99.95th=[ 995], 00:27:37.505 | 99.99th=[ 995] 00:27:37.505 bw ( KiB/s): min=15872, max=231424, per=7.48%, avg=64537.60, stdev=54940.42, samples=20 00:27:37.505 iops : min= 62, max= 904, avg=252.10, stdev=214.61, samples=20 00:27:37.505 lat (msec) : 4=0.04%, 10=0.39%, 20=6.89%, 50=13.23%, 100=17.33% 00:27:37.505 lat (msec) : 250=26.65%, 500=18.38%, 750=13.11%, 1000=3.98% 00:27:37.505 cpu : usr=0.08%, sys=1.12%, ctx=793, majf=0, minf=4097 00:27:37.505 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:37.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.505 issued rwts: total=2585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job6: (groupid=0, jobs=1): err= 0: pid=3121210: Tue Oct 1 17:27:34 2024 00:27:37.505 read: 
IOPS=206, BW=51.6MiB/s (54.1MB/s)(524MiB/10143msec) 00:27:37.505 slat (usec): min=12, max=384633, avg=3353.84, stdev=16677.79 00:27:37.505 clat (msec): min=2, max=1079, avg=306.09, stdev=254.22 00:27:37.505 lat (msec): min=2, max=1103, avg=309.44, stdev=256.97 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 41], 00:27:37.505 | 30.00th=[ 126], 40.00th=[ 188], 50.00th=[ 241], 60.00th=[ 334], 00:27:37.505 | 70.00th=[ 447], 80.00th=[ 518], 90.00th=[ 651], 95.00th=[ 852], 00:27:37.505 | 99.00th=[ 978], 99.50th=[ 995], 99.90th=[ 1083], 99.95th=[ 1083], 00:27:37.505 | 99.99th=[ 1083] 00:27:37.505 bw ( KiB/s): min=14848, max=223232, per=6.02%, avg=52000.20, stdev=46373.27, samples=20 00:27:37.505 iops : min= 58, max= 872, avg=203.10, stdev=181.14, samples=20 00:27:37.505 lat (msec) : 4=2.82%, 10=2.05%, 20=4.92%, 50=12.60%, 100=5.01% 00:27:37.505 lat (msec) : 250=25.39%, 500=24.49%, 750=15.47%, 1000=6.83%, 2000=0.43% 00:27:37.505 cpu : usr=0.08%, sys=0.89%, ctx=823, majf=0, minf=4097 00:27:37.505 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:37.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.505 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.505 job7: (groupid=0, jobs=1): err= 0: pid=3121220: Tue Oct 1 17:27:34 2024 00:27:37.505 read: IOPS=304, BW=76.0MiB/s (79.7MB/s)(772MiB/10155msec) 00:27:37.505 slat (usec): min=5, max=601902, avg=2800.17, stdev=21890.89 00:27:37.505 clat (msec): min=3, max=1247, avg=207.29, stdev=278.20 00:27:37.505 lat (msec): min=3, max=1247, avg=210.09, stdev=281.57 00:27:37.505 clat percentiles (msec): 00:27:37.505 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 30], 00:27:37.505 | 30.00th=[ 39], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 78], 00:27:37.505 | 70.00th=[ 153], 80.00th=[ 472], 90.00th=[ 642], 95.00th=[ 785], 00:27:37.505 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1250], 99.95th=[ 1250], 00:27:37.505 | 99.99th=[ 1250] 00:27:37.506 bw ( KiB/s): min= 5632, max=390144, per=9.44%, avg=81515.79, stdev=104858.60, samples=19 00:27:37.506 iops : min= 22, max= 1524, avg=318.42, stdev=409.60, samples=19 00:27:37.506 lat (msec) : 4=0.06%, 10=4.05%, 20=9.68%, 50=25.83%, 100=25.70% 00:27:37.506 lat (msec) : 250=7.64%, 500=8.74%, 750=13.08%, 1000=2.14%, 2000=3.08% 00:27:37.506 cpu : usr=0.13%, sys=1.05%, ctx=855, majf=0, minf=4097 00:27:37.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:37.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.506 issued rwts: total=3089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.506 job8: (groupid=0, jobs=1): err= 0: pid=3121247: Tue Oct 1 17:27:34 2024 00:27:37.506 read: IOPS=130, BW=32.6MiB/s (34.2MB/s)(331MiB/10149msec) 00:27:37.506 slat (usec): min=11, max=550093, avg=6116.58, stdev=31666.37 00:27:37.506 clat (msec): min=6, max=974, avg=483.51, stdev=240.51 00:27:37.506 lat (msec): min=6, max=1120, avg=489.63, stdev=241.83 00:27:37.506 clat percentiles (msec): 00:27:37.506 | 1.00th=[ 12], 5.00th=[ 43], 10.00th=[ 62], 20.00th=[ 257], 00:27:37.506 | 30.00th=[ 380], 40.00th=[ 477], 50.00th=[ 558], 60.00th=[ 
584], 00:27:37.506 | 70.00th=[ 634], 80.00th=[ 701], 90.00th=[ 760], 95.00th=[ 810], 00:27:37.506 | 99.00th=[ 902], 99.50th=[ 902], 99.90th=[ 978], 99.95th=[ 978], 00:27:37.506 | 99.99th=[ 978] 00:27:37.506 bw ( KiB/s): min= 3584, max=97792, per=3.74%, avg=32256.00, stdev=18758.47, samples=20 00:27:37.506 iops : min= 14, max= 382, avg=126.00, stdev=73.28, samples=20 00:27:37.506 lat (msec) : 10=0.68%, 20=0.76%, 50=7.33%, 100=3.63%, 250=7.02% 00:27:37.506 lat (msec) : 500=23.11%, 750=47.13%, 1000=10.35% 00:27:37.506 cpu : usr=0.06%, sys=0.52%, ctx=202, majf=0, minf=3534 00:27:37.506 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:27:37.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.506 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.506 job9: (groupid=0, jobs=1): err= 0: pid=3121260: Tue Oct 1 17:27:34 2024 00:27:37.506 read: IOPS=777, BW=194MiB/s (204MB/s)(1954MiB/10048msec) 00:27:37.506 slat (usec): min=6, max=118297, avg=1183.96, stdev=4096.85 00:27:37.506 clat (msec): min=12, max=502, avg=81.00, stdev=48.09 00:27:37.506 lat (msec): min=13, max=502, avg=82.18, stdev=48.71 00:27:37.506 clat percentiles (msec): 00:27:37.506 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 36], 00:27:37.506 | 30.00th=[ 52], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 84], 00:27:37.506 | 70.00th=[ 92], 80.00th=[ 112], 90.00th=[ 138], 95.00th=[ 159], 00:27:37.506 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 477], 99.95th=[ 481], 00:27:37.506 | 99.99th=[ 502] 00:27:37.506 bw ( KiB/s): min=59904, max=500736, per=22.98%, avg=198400.00, stdev=107885.03, samples=20 00:27:37.506 iops : min= 234, max= 1956, avg=775.00, stdev=421.43, samples=20 00:27:37.506 lat (msec) : 20=0.10%, 50=29.49%, 100=46.01%, 250=23.00%, 500=1.37% 00:27:37.506 lat (msec) : 750=0.04% 00:27:37.506 cpu : usr=0.23%, sys=2.53%, ctx=1287, majf=0, minf=4097 00:27:37.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:37.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.506 issued rwts: total=7814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.506 job10: (groupid=0, jobs=1): err= 0: pid=3121270: Tue Oct 1 17:27:34 2024 00:27:37.506 read: IOPS=343, BW=85.9MiB/s (90.1MB/s)(872MiB/10143msec) 00:27:37.506 slat (usec): min=5, max=545729, avg=2557.52, stdev=16056.13 00:27:37.506 clat (msec): min=17, max=978, avg=183.34, stdev=239.62 00:27:37.506 lat (msec): min=18, max=985, avg=185.90, stdev=242.84 00:27:37.506 clat percentiles (msec): 00:27:37.506 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:27:37.506 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 40], 60.00th=[ 43], 00:27:37.506 | 70.00th=[ 150], 80.00th=[ 464], 90.00th=[ 567], 95.00th=[ 709], 00:27:37.506 | 99.00th=[ 852], 99.50th=[ 852], 99.90th=[ 894], 99.95th=[ 978], 00:27:37.506 | 99.99th=[ 978] 00:27:37.506 bw ( KiB/s): min= 8704, max=494592, per=10.15%, avg=87628.80, stdev=139818.02, samples=20 00:27:37.506 iops : min= 34, max= 1932, avg=342.30, stdev=546.16, samples=20 00:27:37.506 lat (msec) : 20=0.09%, 50=64.15%, 100=4.24%, 250=4.30%, 500=11.36% 00:27:37.506 lat (msec) : 750=12.91%, 1000=2.95% 00:27:37.506 cpu : 
usr=0.06%, sys=1.08%, ctx=570, majf=0, minf=4098 00:27:37.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:37.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.506 issued rwts: total=3487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.506 00:27:37.506 Run status group 0 (all jobs): 00:27:37.506 READ: bw=843MiB/s (884MB/s), 32.6MiB/s-194MiB/s (34.2MB/s-204MB/s), io=8566MiB (8982MB), run=10020-10160msec 00:27:37.506 00:27:37.506 Disk stats (read/write): 00:27:37.506 nvme0n1: ios=2694/0, merge=0/0, ticks=1219447/0, in_queue=1219447, util=95.65% 00:27:37.506 nvme10n1: ios=11971/0, merge=0/0, ticks=1243369/0, in_queue=1243369, util=96.26% 00:27:37.506 nvme1n1: ios=5615/0, merge=0/0, ticks=1243227/0, in_queue=1243227, util=96.89% 00:27:37.506 nvme2n1: ios=2952/0, merge=0/0, ticks=1220224/0, in_queue=1220224, util=97.19% 00:27:37.506 nvme3n1: ios=3194/0, merge=0/0, ticks=1240340/0, in_queue=1240340, util=97.33% 00:27:37.506 nvme4n1: ios=5045/0, merge=0/0, ticks=1200746/0, in_queue=1200746, util=97.80% 00:27:37.506 nvme5n1: ios=4062/0, merge=0/0, ticks=1226357/0, in_queue=1226357, util=98.03% 00:27:37.506 nvme6n1: ios=6051/0, merge=0/0, ticks=1206008/0, in_queue=1206008, util=98.22% 00:27:37.506 nvme7n1: ios=2524/0, merge=0/0, ticks=1186792/0, in_queue=1186792, util=98.84% 00:27:37.506 nvme8n1: ios=15500/0, merge=0/0, ticks=1232813/0, in_queue=1232813, util=99.00% 00:27:37.506 nvme9n1: ios=6854/0, merge=0/0, ticks=1209129/0, in_queue=1209129, util=99.24% 00:27:37.506 17:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:37.506 [global] 00:27:37.506 thread=1 00:27:37.506 invalidate=1 00:27:37.506 rw=randwrite 00:27:37.506 time_based=1 00:27:37.506 runtime=10 00:27:37.506 ioengine=libaio 00:27:37.506 direct=1 00:27:37.506 bs=262144 00:27:37.506 iodepth=64 00:27:37.506 norandommap=1 00:27:37.506 numjobs=1 00:27:37.506 00:27:37.506 [job0] 00:27:37.506 filename=/dev/nvme0n1 00:27:37.506 [job1] 00:27:37.506 filename=/dev/nvme10n1 00:27:37.506 [job2] 00:27:37.506 filename=/dev/nvme1n1 00:27:37.506 [job3] 00:27:37.506 filename=/dev/nvme2n1 00:27:37.506 [job4] 00:27:37.506 filename=/dev/nvme3n1 00:27:37.506 [job5] 00:27:37.506 filename=/dev/nvme4n1 00:27:37.506 [job6] 00:27:37.506 filename=/dev/nvme5n1 00:27:37.506 [job7] 00:27:37.506 filename=/dev/nvme6n1 00:27:37.506 [job8] 00:27:37.506 filename=/dev/nvme7n1 00:27:37.506 [job9] 00:27:37.506 filename=/dev/nvme8n1 00:27:37.506 [job10] 00:27:37.506 filename=/dev/nvme9n1 00:27:37.506 Could not set queue depth (nvme0n1) 00:27:37.506 Could not set queue depth (nvme10n1) 00:27:37.506 Could not set queue depth (nvme1n1) 00:27:37.506 Could not set queue depth (nvme2n1) 00:27:37.506 Could not set queue depth (nvme3n1) 00:27:37.506 Could not set queue depth (nvme4n1) 00:27:37.506 Could not set queue depth (nvme5n1) 00:27:37.506 Could not set queue depth (nvme6n1) 00:27:37.506 Could not set queue depth (nvme7n1) 00:27:37.506 Could not set queue depth (nvme8n1) 00:27:37.506 Could not set queue depth (nvme9n1) 00:27:37.506 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job1: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.506 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.507 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.507 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:37.507 fio-3.35 00:27:37.507 Starting 11 threads 00:27:47.535 00:27:47.535 job0: (groupid=0, jobs=1): err= 0: pid=3123052: Tue Oct 1 17:27:45 2024 00:27:47.535 write: IOPS=419, BW=105MiB/s (110MB/s)(1057MiB/10078msec); 0 zone resets 00:27:47.535 slat (usec): min=24, max=23734, avg=2360.46, stdev=4645.06 00:27:47.535 clat (msec): min=3, max=292, avg=150.11, stdev=69.28 00:27:47.535 lat (msec): min=3, max=292, avg=152.47, stdev=70.23 00:27:47.535 clat percentiles (msec): 00:27:47.535 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 85], 00:27:47.535 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 138], 60.00th=[ 201], 00:27:47.535 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 241], 00:27:47.535 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:27:47.535 | 99.99th=[ 292] 00:27:47.535 bw ( KiB/s): min=63488, max=210432, per=9.70%, avg=106630.90, stdev=51638.95, samples=20 00:27:47.535 iops : min= 248, max= 822, avg=416.50, stdev=201.67, samples=20 00:27:47.535 lat (msec) : 4=0.02%, 20=0.09%, 50=2.62%, 100=38.61%, 250=55.40% 00:27:47.535 lat (msec) : 500=3.24% 00:27:47.535 cpu : usr=1.07%, sys=1.21%, ctx=1040, majf=0, minf=1 00:27:47.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:47.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.535 issued rwts: total=0,4229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.535 job1: (groupid=0, jobs=1): err= 0: pid=3123064: Tue Oct 1 17:27:45 2024 00:27:47.535 write: IOPS=297, BW=74.5MiB/s (78.1MB/s)(754MiB/10115msec); 0 zone resets 00:27:47.535 slat (usec): min=24, max=80877, avg=2990.26, stdev=6521.30 00:27:47.535 clat (msec): min=22, max=394, avg=211.72, stdev=78.87 00:27:47.535 lat (msec): min=22, max=394, avg=214.71, stdev=80.00 00:27:47.535 clat percentiles (msec): 00:27:47.535 | 1.00th=[ 74], 5.00th=[ 84], 10.00th=[ 94], 20.00th=[ 125], 00:27:47.535 | 30.00th=[ 178], 40.00th=[ 207], 50.00th=[ 224], 60.00th=[ 234], 00:27:47.535 | 70.00th=[ 251], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 338], 00:27:47.535 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 397], 00:27:47.535 | 
99.99th=[ 397] 00:27:47.535 bw ( KiB/s): min=45056, max=158720, per=6.87%, avg=75511.55, stdev=28340.35, samples=20 00:27:47.535 iops : min= 176, max= 620, avg=294.95, stdev=110.70, samples=20 00:27:47.536 lat (msec) : 50=0.40%, 100=14.70%, 250=54.81%, 500=30.09% 00:27:47.536 cpu : usr=0.78%, sys=1.01%, ctx=994, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,3014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job2: (groupid=0, jobs=1): err= 0: pid=3123065: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=354, BW=88.6MiB/s (92.9MB/s)(896MiB/10108msec); 0 zone resets 00:27:47.536 slat (usec): min=21, max=153993, avg=2552.26, stdev=6240.47 00:27:47.536 clat (usec): min=1917, max=493973, avg=177989.63, stdev=95643.50 00:27:47.536 lat (msec): min=2, max=494, avg=180.54, stdev=97.03 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 31], 20.00th=[ 95], 00:27:47.536 | 30.00th=[ 142], 40.00th=[ 157], 50.00th=[ 174], 60.00th=[ 215], 00:27:47.536 | 70.00th=[ 232], 80.00th=[ 251], 90.00th=[ 309], 95.00th=[ 342], 00:27:47.536 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 477], 00:27:47.536 | 99.99th=[ 493] 00:27:47.536 bw ( KiB/s): min=46080, max=218112, per=8.19%, avg=90076.35, stdev=42217.23, samples=20 00:27:47.536 iops : min= 180, max= 852, avg=351.85, stdev=164.91, samples=20 00:27:47.536 lat (msec) : 2=0.06%, 4=0.84%, 10=4.16%, 20=2.65%, 50=6.98% 00:27:47.536 lat (msec) : 100=6.03%, 250=59.21%, 500=20.07% 00:27:47.536 cpu : usr=0.94%, sys=1.07%, ctx=1415, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,3582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job3: (groupid=0, jobs=1): err= 0: pid=3123066: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=331, BW=83.0MiB/s (87.0MB/s)(836MiB/10080msec); 0 zone resets 00:27:47.536 slat (usec): min=22, max=43487, avg=2771.95, stdev=5456.46 00:27:47.536 clat (msec): min=6, max=447, avg=190.03, stdev=62.01 00:27:47.536 lat (msec): min=6, max=453, avg=192.80, stdev=62.82 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 20], 5.00th=[ 83], 10.00th=[ 92], 20.00th=[ 123], 00:27:47.536 | 30.00th=[ 190], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 218], 00:27:47.536 | 70.00th=[ 222], 80.00th=[ 230], 90.00th=[ 241], 95.00th=[ 259], 00:27:47.536 | 99.00th=[ 300], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 447], 00:27:47.536 | 99.99th=[ 447] 00:27:47.536 bw ( KiB/s): min=63488, max=169472, per=7.64%, avg=84010.95, stdev=24665.63, samples=20 00:27:47.536 iops : min= 248, max= 662, avg=328.15, stdev=96.35, samples=20 00:27:47.536 lat (msec) : 10=0.36%, 20=0.72%, 50=1.52%, 100=11.36%, 250=78.45% 00:27:47.536 lat (msec) : 500=7.59% 00:27:47.536 cpu : usr=0.81%, sys=0.93%, ctx=1093, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,3345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job4: (groupid=0, jobs=1): err= 0: pid=3123067: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=443, BW=111MiB/s (116MB/s)(1117MiB/10072msec); 0 zone resets 00:27:47.536 slat (usec): min=24, max=50757, avg=2127.92, stdev=4140.31 00:27:47.536 clat (msec): min=15, max=369, avg=142.09, stdev=46.81 00:27:47.536 lat (msec): min=15, max=371, avg=144.21, stdev=47.39 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 40], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 120], 00:27:47.536 | 30.00th=[ 131], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 146], 00:27:47.536 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 184], 00:27:47.536 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 359], 99.95th=[ 363], 00:27:47.536 | 99.99th=[ 368] 00:27:47.536 bw ( KiB/s): min=47104, max=186880, per=10.25%, avg=112730.00, stdev=27517.52, samples=20 00:27:47.536 iops : min= 184, max= 730, avg=440.35, stdev=107.49, samples=20 00:27:47.536 lat (msec) : 20=0.11%, 50=1.41%, 100=10.30%, 250=83.70%, 500=4.48% 00:27:47.536 cpu : usr=0.92%, sys=1.27%, ctx=1308, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,4467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job5: (groupid=0, jobs=1): err= 0: pid=3123068: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=323, BW=80.8MiB/s (84.7MB/s)(814MiB/10082msec); 0 zone resets 00:27:47.536 slat (usec): min=24, max=109557, avg=2850.20, stdev=6129.34 00:27:47.536 clat (msec): min=14, max=446, avg=195.20, stdev=86.17 00:27:47.536 lat (msec): min=16, max=446, avg=198.05, stdev=87.44 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 47], 5.00th=[ 73], 10.00th=[ 85], 20.00th=[ 94], 00:27:47.536 | 30.00th=[ 117], 40.00th=[ 192], 50.00th=[ 215], 60.00th=[ 230], 00:27:47.536 | 70.00th=[ 243], 80.00th=[ 262], 90.00th=[ 305], 95.00th=[ 326], 00:27:47.536 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 447], 00:27:47.536 | 99.99th=[ 447] 00:27:47.536 bw ( KiB/s): min=38912, max=185344, per=7.44%, avg=81758.20, stdev=39400.74, samples=20 00:27:47.536 iops : min= 152, max= 724, avg=319.35, stdev=153.91, samples=20 00:27:47.536 lat (msec) : 20=0.09%, 50=1.01%, 100=24.44%, 250=48.05%, 500=26.40% 00:27:47.536 cpu : usr=0.70%, sys=1.06%, ctx=1092, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,3257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job6: (groupid=0, jobs=1): err= 0: pid=3123069: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=399, BW=99.9MiB/s (105MB/s)(1005MiB/10062msec); 0 zone resets 00:27:47.536 slat (usec): min=24, max=95566, avg=2250.72, stdev=5124.35 00:27:47.536 clat (msec): min=2, max=450, avg=157.36, stdev=73.92 00:27:47.536 lat (msec): min=2, 
max=450, avg=159.61, stdev=74.86 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 81], 20.00th=[ 86], 00:27:47.536 | 30.00th=[ 90], 40.00th=[ 121], 50.00th=[ 190], 60.00th=[ 205], 00:27:47.536 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 232], 95.00th=[ 247], 00:27:47.536 | 99.00th=[ 292], 99.50th=[ 342], 99.90th=[ 435], 99.95th=[ 443], 00:27:47.536 | 99.99th=[ 451] 00:27:47.536 bw ( KiB/s): min=63488, max=185344, per=9.21%, avg=101280.70, stdev=41844.07, samples=20 00:27:47.536 iops : min= 248, max= 724, avg=395.60, stdev=163.40, samples=20 00:27:47.536 lat (msec) : 4=0.40%, 10=1.39%, 20=1.12%, 50=3.83%, 100=30.57% 00:27:47.536 lat (msec) : 250=58.21%, 500=4.48% 00:27:47.536 cpu : usr=1.00%, sys=1.09%, ctx=1426, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,4020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job7: (groupid=0, jobs=1): err= 0: pid=3123070: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=416, BW=104MiB/s (109MB/s)(1049MiB/10079msec); 0 zone resets 00:27:47.536 slat (usec): min=25, max=19168, avg=2313.62, stdev=4556.08 00:27:47.536 clat (msec): min=16, max=287, avg=151.33, stdev=65.93 00:27:47.536 lat (msec): min=16, max=289, avg=153.64, stdev=66.82 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 74], 20.00th=[ 86], 00:27:47.536 | 30.00th=[ 90], 40.00th=[ 100], 50.00th=[ 142], 60.00th=[ 201], 00:27:47.536 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 230], 95.00th=[ 236], 00:27:47.536 | 99.00th=[ 259], 99.50th=[ 264], 99.90th=[ 284], 99.95th=[ 284], 00:27:47.536 | 99.99th=[ 288] 00:27:47.536 bw ( KiB/s): min=63488, max=194436, per=9.63%, avg=105831.10, stdev=47523.50, samples=20 00:27:47.536 iops : min= 248, max= 759, avg=413.35, stdev=185.54, samples=20 00:27:47.536 lat (msec) : 20=0.10%, 50=0.29%, 100=39.79%, 250=57.68%, 500=2.14% 00:27:47.536 cpu : usr=1.06%, sys=1.17%, ctx=1109, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,4197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job8: (groupid=0, jobs=1): err= 0: pid=3123074: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=531, BW=133MiB/s (139MB/s)(1337MiB/10071msec); 0 zone resets 00:27:47.536 slat (usec): min=21, max=33957, avg=1842.53, stdev=3461.16 00:27:47.536 clat (msec): min=15, max=176, avg=118.59, stdev=35.63 00:27:47.536 lat (msec): min=15, max=176, avg=120.43, stdev=36.13 00:27:47.536 clat percentiles (msec): 00:27:47.536 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 78], 00:27:47.536 | 30.00th=[ 104], 40.00th=[ 127], 50.00th=[ 134], 60.00th=[ 140], 00:27:47.536 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 157], 00:27:47.536 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 176], 00:27:47.536 | 99.99th=[ 178] 00:27:47.536 bw ( KiB/s): min=103936, max=276480, per=12.31%, avg=135308.95, stdev=50221.83, samples=20 00:27:47.536 iops : min= 406, max= 
1080, avg=528.55, stdev=196.18, samples=20 00:27:47.536 lat (msec) : 20=0.07%, 50=0.15%, 100=28.66%, 250=71.12% 00:27:47.536 cpu : usr=1.30%, sys=1.70%, ctx=1348, majf=0, minf=1 00:27:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.536 issued rwts: total=0,5349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.536 job9: (groupid=0, jobs=1): err= 0: pid=3123078: Tue Oct 1 17:27:45 2024 00:27:47.536 write: IOPS=306, BW=76.6MiB/s (80.4MB/s)(773MiB/10081msec); 0 zone resets 00:27:47.536 slat (usec): min=22, max=90556, avg=3216.61, stdev=6726.41 00:27:47.537 clat (msec): min=73, max=468, avg=205.45, stdev=88.05 00:27:47.537 lat (msec): min=73, max=469, avg=208.67, stdev=89.18 00:27:47.537 clat percentiles (msec): 00:27:47.537 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 97], 00:27:47.537 | 30.00th=[ 132], 40.00th=[ 201], 50.00th=[ 222], 60.00th=[ 232], 00:27:47.537 | 70.00th=[ 247], 80.00th=[ 271], 90.00th=[ 326], 95.00th=[ 351], 00:27:47.537 | 99.00th=[ 422], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 468], 00:27:47.537 | 99.99th=[ 468] 00:27:47.537 bw ( KiB/s): min=40960, max=169472, per=7.05%, avg=77483.00, stdev=35180.48, samples=20 00:27:47.537 iops : min= 160, max= 662, avg=302.65, stdev=137.42, samples=20 00:27:47.537 lat (msec) : 100=22.10%, 250=49.09%, 500=28.80% 00:27:47.537 cpu : usr=0.77%, sys=0.96%, ctx=776, majf=0, minf=1 00:27:47.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:47.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.537 issued rwts: total=0,3090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.537 job10: (groupid=0, jobs=1): err= 0: pid=3123085: Tue Oct 1 17:27:45 2024 00:27:47.537 write: IOPS=483, BW=121MiB/s (127MB/s)(1224MiB/10115msec); 0 zone resets 00:27:47.537 slat (usec): min=25, max=80073, avg=1855.77, stdev=4794.90 00:27:47.537 clat (msec): min=2, max=492, avg=130.37, stdev=94.35 00:27:47.537 lat (msec): min=2, max=492, avg=132.23, stdev=95.55 00:27:47.537 clat percentiles (msec): 00:27:47.537 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 60], 20.00th=[ 63], 00:27:47.537 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 140], 00:27:47.537 | 70.00th=[ 163], 80.00th=[ 197], 90.00th=[ 266], 95.00th=[ 334], 00:27:47.537 | 99.00th=[ 451], 99.50th=[ 481], 99.90th=[ 489], 99.95th=[ 493], 00:27:47.537 | 99.99th=[ 493] 00:27:47.537 bw ( KiB/s): min=34304, max=318976, per=11.25%, avg=123663.70, stdev=82590.66, samples=20 00:27:47.537 iops : min= 134, max= 1246, avg=483.05, stdev=322.62, samples=20 00:27:47.537 lat (msec) : 4=1.70%, 10=1.19%, 20=1.68%, 50=3.45%, 100=46.83% 00:27:47.537 lat (msec) : 250=31.30%, 500=13.85% 00:27:47.537 cpu : usr=1.04%, sys=1.71%, ctx=1706, majf=0, minf=1 00:27:47.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:47.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.537 issued rwts: total=0,4894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.537 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:27:47.537 00:27:47.537 Run status group 0 (all jobs): 00:27:47.537 WRITE: bw=1074MiB/s (1126MB/s), 74.5MiB/s-133MiB/s (78.1MB/s-139MB/s), io=10.6GiB (11.4GB), run=10062-10115msec 00:27:47.537 00:27:47.537 Disk stats (read/write): 00:27:47.537 nvme0n1: ios=49/8119, merge=0/0, ticks=81/1196323, in_queue=1196404, util=96.77% 00:27:47.537 nvme10n1: ios=48/5979, merge=0/0, ticks=370/1229091, in_queue=1229461, util=99.78% 00:27:47.537 nvme1n1: ios=0/7129, merge=0/0, ticks=0/1231147, in_queue=1231147, util=97.00% 00:27:47.537 nvme2n1: ios=47/6350, merge=0/0, ticks=475/1201853, in_queue=1202328, util=98.47% 00:27:47.537 nvme3n1: ios=43/8569, merge=0/0, ticks=845/1200537, in_queue=1201382, util=99.99% 00:27:47.537 nvme4n1: ios=0/6144, merge=0/0, ticks=0/1200090, in_queue=1200090, util=97.69% 00:27:47.537 nvme5n1: ios=43/7662, merge=0/0, ticks=2080/1191717, in_queue=1193797, util=100.00% 00:27:47.537 nvme6n1: ios=45/8055, merge=0/0, ticks=234/1197211, in_queue=1197445, util=99.68% 00:27:47.537 nvme7n1: ios=39/10323, merge=0/0, ticks=1413/1197294, in_queue=1198707, util=99.86% 00:27:47.537 nvme8n1: ios=40/5840, merge=0/0, ticks=2870/1197117, in_queue=1199987, util=100.00% 00:27:47.537 nvme9n1: ios=43/9740, merge=0/0, ticks=1175/1230661, in_queue=1231836, util=100.00% 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:47.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.537 17:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:47.797 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:47.797 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:48.057 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:48.057 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.318 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- 
# nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:48.578 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.578 17:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:48.838 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:48.838 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:48.838 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.839 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:49.099 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.099 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:49.359 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.359 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:49.620 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:49.620 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:49.620 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:49.620 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:49.620 17:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:49.620 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.620 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:49.881 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:49.881 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.882 rmmod nvme_tcp 00:27:49.882 rmmod nvme_fabrics 00:27:49.882 rmmod nvme_keyring 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 3111961 ']' 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 3111961 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3111961 ']' 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3111961 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3111961 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3111961' 00:27:49.882 killing process with pid 3111961 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3111961 00:27:49.882 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3111961 00:27:50.142 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:50.142 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:50.142 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:50.142 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:50.402 17:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.402 17:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:52.316 00:27:52.316 real 1m17.757s 00:27:52.316 user 5m2.664s 00:27:52.316 sys 0m15.966s 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:52.316 ************************************ 00:27:52.316 END TEST nvmf_multiconnection 00:27:52.316 ************************************ 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:52.316 ************************************ 00:27:52.316 START TEST nvmf_initiator_timeout 00:27:52.316 ************************************ 00:27:52.316 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:52.578 * Looking for test storage... 
00:27:52.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.578 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:52.578 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:27:52.578 17:27:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:52.578 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:52.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.579 --rc genhtml_branch_coverage=1 00:27:52.579 --rc genhtml_function_coverage=1 00:27:52.579 --rc genhtml_legend=1 00:27:52.579 --rc geninfo_all_blocks=1 00:27:52.579 --rc geninfo_unexecuted_blocks=1 00:27:52.579 00:27:52.579 ' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:52.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.579 --rc genhtml_branch_coverage=1 00:27:52.579 --rc genhtml_function_coverage=1 00:27:52.579 --rc genhtml_legend=1 00:27:52.579 --rc geninfo_all_blocks=1 00:27:52.579 --rc geninfo_unexecuted_blocks=1 00:27:52.579 00:27:52.579 ' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:52.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.579 --rc genhtml_branch_coverage=1 00:27:52.579 --rc genhtml_function_coverage=1 00:27:52.579 --rc genhtml_legend=1 00:27:52.579 --rc geninfo_all_blocks=1 00:27:52.579 --rc geninfo_unexecuted_blocks=1 00:27:52.579 00:27:52.579 ' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:52.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.579 --rc genhtml_branch_coverage=1 00:27:52.579 --rc genhtml_function_coverage=1 00:27:52.579 --rc genhtml_legend=1 00:27:52.579 --rc geninfo_all_blocks=1 00:27:52.579 --rc geninfo_unexecuted_blocks=1 00:27:52.579 00:27:52.579 ' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.579 17:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.579 17:27:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.712 17:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.712 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:00.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.713 17:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:00.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:00.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.713 17:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:00.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.713 17:27:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.713 17:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:28:00.713 00:28:00.713 --- 10.0.0.2 ping statistics --- 00:28:00.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.713 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:28:00.713 00:28:00.713 --- 10.0.0.1 ping statistics --- 00:28:00.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.713 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=3129348 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 
3129348 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3129348 ']' 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 17:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:00.713 [2024-10-01 17:27:58.285038] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:28:00.713 [2024-10-01 17:27:58.285107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.713 [2024-10-01 17:27:58.357957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.713 [2024-10-01 17:27:58.397669] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.713 [2024-10-01 17:27:58.397716] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.713 [2024-10-01 17:27:58.397724] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.713 [2024-10-01 17:27:58.397731] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.713 [2024-10-01 17:27:58.397737] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
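At this point the initiator_timeout test has the target-side plumbing in place: cvl_0_1 is opened to NVMe/TCP traffic on port 4420, connectivity between 10.0.0.1 and 10.0.0.2 is confirmed with single pings, and nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xF while waitforlisten blocks on the RPC socket. A minimal sketch of those steps, using only the commands visible in this trace (the socket-polling loop is an assumption, not the exact common.sh implementation):

# Sketch of the nvmfappstart sequence traced above; flags and paths are copied
# from the log, the wait loop is a stand-in for waitforlisten.
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Open the target-facing interface for NVMe/TCP (mirrors the ipts wrapper above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Start the target inside the namespace with the same core mask as the log (-m 0xF).
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait for the app to come up and listen on its UNIX-domain RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

With the resulting pid (3129348 here) recorded, the trap registered below can later kill the target and tear the setup down via nvmftestfini.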
00:28:00.713 [2024-10-01 17:27:58.397883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.713 [2024-10-01 17:27:58.398031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.713 [2024-10-01 17:27:58.398134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.713 [2024-10-01 17:27:58.398135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 Malloc0 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 Delay0 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 [2024-10-01 17:27:59.166861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.713 17:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.713 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.714 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.714 [2024-10-01 17:27:59.207154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.714 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.714 17:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:02.639 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:02.639 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:02.639 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:02.639 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:02.639 17:28:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3130335 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:04.552 17:28:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:04.552 [global] 00:28:04.552 thread=1 00:28:04.552 invalidate=1 00:28:04.552 rw=write 00:28:04.552 time_based=1 00:28:04.552 runtime=60 00:28:04.552 ioengine=libaio 00:28:04.552 direct=1 00:28:04.552 bs=4096 00:28:04.552 iodepth=1 00:28:04.552 norandommap=0 00:28:04.552 numjobs=1 00:28:04.552 00:28:04.552 verify_dump=1 00:28:04.552 verify_backlog=512 00:28:04.552 verify_state_save=0 00:28:04.552 do_verify=1 00:28:04.552 verify=crc32c-intel 00:28:04.552 [job0] 00:28:04.552 filename=/dev/nvme0n1 00:28:04.552 Could not set queue depth (nvme0n1) 00:28:04.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:04.813 fio-3.35 00:28:04.813 Starting 1 thread 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.357 true 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.357 true 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.357 true 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.357 true 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.357 17:28:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.658 17:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 true 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 true 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 true 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.658 true 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:10.658 17:28:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3130335 00:29:06.995 00:29:06.995 job0: (groupid=0, jobs=1): err= 0: pid=3130547: Tue Oct 1 17:29:03 2024 00:29:06.995 read: IOPS=28, BW=113KiB/s (116kB/s)(6772KiB/60014msec) 00:29:06.995 slat (usec): min=6, max=9623, avg=32.68, stdev=233.25 00:29:06.995 clat (usec): min=589, max=42055, avg=9957.17, stdev=16903.83 00:29:06.995 lat (usec): min=616, max=51107, avg=9989.85, stdev=16915.78 00:29:06.995 clat percentiles (usec): 00:29:06.995 | 1.00th=[ 824], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 996], 00:29:06.995 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:29:06.995 | 70.00th=[ 1090], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:06.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:06.995 | 99.99th=[42206] 00:29:06.995 write: IOPS=34, BW=137KiB/s (140kB/s)(8192KiB/60014msec); 0 zone resets 00:29:06.995 slat (nsec): min=8996, max=69907, avg=29349.44, stdev=10260.18 00:29:06.995 clat (usec): min=226, max=41816k, avg=21001.35, stdev=924000.90 00:29:06.995 lat (usec): min=238, max=41816k, avg=21030.70, stdev=924000.99 00:29:06.995 clat percentiles (usec): 00:29:06.995 | 1.00th=[ 347], 5.00th=[ 400], 10.00th=[ 449], 00:29:06.995 | 20.00th=[ 498], 30.00th=[ 545], 40.00th=[ 570], 00:29:06.995 | 50.00th=[ 586], 60.00th=[ 611], 70.00th=[ 652], 00:29:06.995 | 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 734], 00:29:06.995 | 99.00th=[ 799], 99.50th=[ 832], 
99.90th=[ 898], 00:29:06.996 | 99.95th=[ 906], 99.99th=[17112761] 00:29:06.996 bw ( KiB/s): min= 128, max= 4096, per=100.00%, avg=2340.57, stdev=1649.69, samples=7 00:29:06.996 iops : min= 32, max= 1024, avg=585.14, stdev=412.42, samples=7 00:29:06.996 lat (usec) : 250=0.03%, 500=11.09%, 750=42.05%, 1000=12.32% 00:29:06.996 lat (msec) : 2=24.59%, 50=9.89%, >=2000=0.03% 00:29:06.996 cpu : usr=0.13%, sys=0.21%, ctx=3743, majf=0, minf=1 00:29:06.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:06.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.996 issued rwts: total=1693,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:06.996 00:29:06.996 Run status group 0 (all jobs): 00:29:06.996 READ: bw=113KiB/s (116kB/s), 113KiB/s-113KiB/s (116kB/s-116kB/s), io=6772KiB (6935kB), run=60014-60014msec 00:29:06.996 WRITE: bw=137KiB/s (140kB/s), 137KiB/s-137KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60014-60014msec 00:29:06.996 00:29:06.996 Disk stats (read/write): 00:29:06.996 nvme0n1: ios=1789/2048, merge=0/0, ticks=17916/1051, in_queue=18967, util=99.61% 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:06.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:06.996 nvmf hotplug test: fio successful as expected 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:06.996 17:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.996 rmmod nvme_tcp 00:29:06.996 rmmod nvme_fabrics 00:29:06.996 rmmod nvme_keyring 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 3129348 ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3129348 ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129348' 00:29:06.996 killing process with pid 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3129348 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:29:06.996 17:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.996 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.301 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.301 00:29:07.301 real 1m14.976s 00:29:07.301 user 4m33.702s 00:29:07.301 sys 0m7.454s 00:29:07.301 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.301 17:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:07.301 ************************************ 00:29:07.301 END TEST nvmf_initiator_timeout 00:29:07.301 ************************************ 00:29:07.562 17:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:07.562 17:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:29:07.562 17:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:29:07.562 17:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.562 17:29:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.705 17:29:12 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:15.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:15.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:15.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:15.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:15.705 ************************************ 00:29:15.705 START TEST nvmf_perf_adq 00:29:15.705 ************************************ 00:29:15.705 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:15.705 * Looking for test storage... 
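The perf_adq test that starts here targets Intel E810 (ice) NICs: after sourcing nvmf/common.sh it enumerates the supported PCI devices (gather_supported_nvmf_pci_devs, traced below), then reloads the ice driver before configuring ADQ. The driver-reload step, condensed from the adq_reload_driver trace further down (commands as logged; anything beyond them is an assumption):

# adq_reload_driver, as traced in perf_adq.sh below.
modprobe -a sch_mqprio   # mqprio qdisc used for ADQ traffic classes
rmmod ice                # unload and ...
modprobe ice             # ... reload the E810 driver for a clean ADQ state
sleep 5                  # let the cvl_0_* ports come back before nvmftestinit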
00:29:15.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.705 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:15.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.705 --rc genhtml_branch_coverage=1 00:29:15.705 --rc genhtml_function_coverage=1 00:29:15.705 --rc genhtml_legend=1 00:29:15.706 --rc geninfo_all_blocks=1 00:29:15.706 --rc geninfo_unexecuted_blocks=1 00:29:15.706 00:29:15.706 ' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.706 --rc genhtml_branch_coverage=1 00:29:15.706 --rc genhtml_function_coverage=1 00:29:15.706 --rc genhtml_legend=1 00:29:15.706 --rc geninfo_all_blocks=1 00:29:15.706 --rc geninfo_unexecuted_blocks=1 00:29:15.706 00:29:15.706 ' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.706 --rc genhtml_branch_coverage=1 00:29:15.706 --rc genhtml_function_coverage=1 00:29:15.706 --rc genhtml_legend=1 00:29:15.706 --rc geninfo_all_blocks=1 00:29:15.706 --rc geninfo_unexecuted_blocks=1 00:29:15.706 00:29:15.706 ' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.706 --rc genhtml_branch_coverage=1 00:29:15.706 --rc genhtml_function_coverage=1 00:29:15.706 --rc genhtml_legend=1 00:29:15.706 --rc geninfo_all_blocks=1 00:29:15.706 --rc geninfo_unexecuted_blocks=1 00:29:15.706 00:29:15.706 ' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
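For orientation, the nvmf/common.sh header being sourced here reduces to the following defaults for this run (values copied from the trace that follows; the log records only expanded values, so treat the exact shell expressions as approximations):

# Condensed view of the nvmf/common.sh settings traced below.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_TRANSPORT_OPTS=
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be on this host
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_CONNECT='nvme connect'
NET_TYPE=phy
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn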
00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:15.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:15.706 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.706 17:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.293 17:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:22.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:22.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:22.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:22.293 17:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:22.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:22.293 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:23.233 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:25.142 17:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.430 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:30.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:30.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:30.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:30.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.431 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.692 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:29:30.692 00:29:30.692 --- 10.0.0.2 ping statistics --- 00:29:30.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.692 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:29:30.692 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:29:30.692 00:29:30.692 --- 10.0.0.1 ping statistics --- 00:29:30.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.692 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:29:30.692 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.692 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:29:30.692 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3151293 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3151293 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3151293 ']' 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.692 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.692 [2024-10-01 17:29:29.108788] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
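At this point nvmftestinit has split the two E810 ports into a back-to-back test topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction confirms connectivity. Condensed from the trace above (interface names and addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP; the SPDK_NVMF comment lets the teardown strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The matching teardown later in the trace (iptr and remove_spdk_ns) restores iptables by replaying a dump with the SPDK_NVMF-tagged rules filtered out, then deletes the namespace.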
00:29:30.692 [2024-10-01 17:29:29.108857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.692 [2024-10-01 17:29:29.181099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.692 [2024-10-01 17:29:29.221678] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.692 [2024-10-01 17:29:29.221723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.692 [2024-10-01 17:29:29.221736] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.692 [2024-10-01 17:29:29.221743] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.692 [2024-10-01 17:29:29.221749] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.693 [2024-10-01 17:29:29.221929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.693 [2024-10-01 17:29:29.222080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.693 [2024-10-01 17:29:29.222138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.693 [2024-10-01 17:29:29.222140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 
17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 [2024-10-01 17:29:30.098832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 Malloc1 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.633 [2024-10-01 17:29:30.158094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3151584 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:31.633 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:34.176 "tick_rate": 2400000000, 00:29:34.176 "poll_groups": [ 00:29:34.176 { 00:29:34.176 "name": "nvmf_tgt_poll_group_000", 00:29:34.176 "admin_qpairs": 1, 00:29:34.176 "io_qpairs": 1, 00:29:34.176 "current_admin_qpairs": 1, 00:29:34.176 "current_io_qpairs": 1, 00:29:34.176 "pending_bdev_io": 0, 00:29:34.176 "completed_nvme_io": 19778, 00:29:34.176 "transports": [ 00:29:34.176 { 00:29:34.176 "trtype": "TCP" 00:29:34.176 } 00:29:34.176 ] 00:29:34.176 }, 00:29:34.176 { 00:29:34.176 "name": "nvmf_tgt_poll_group_001", 00:29:34.176 "admin_qpairs": 0, 00:29:34.176 "io_qpairs": 1, 00:29:34.176 "current_admin_qpairs": 0, 00:29:34.176 "current_io_qpairs": 1, 00:29:34.176 "pending_bdev_io": 0, 00:29:34.176 "completed_nvme_io": 28250, 00:29:34.176 "transports": [ 00:29:34.176 { 00:29:34.176 "trtype": "TCP" 00:29:34.176 } 00:29:34.176 ] 00:29:34.176 }, 00:29:34.176 { 00:29:34.176 "name": "nvmf_tgt_poll_group_002", 00:29:34.176 "admin_qpairs": 0, 00:29:34.176 "io_qpairs": 1, 00:29:34.176 "current_admin_qpairs": 0, 00:29:34.176 "current_io_qpairs": 1, 00:29:34.176 "pending_bdev_io": 0, 00:29:34.176 "completed_nvme_io": 22142, 00:29:34.176 "transports": [ 00:29:34.176 { 00:29:34.176 "trtype": "TCP" 00:29:34.176 } 00:29:34.176 ] 00:29:34.176 }, 00:29:34.176 { 00:29:34.176 "name": "nvmf_tgt_poll_group_003", 00:29:34.176 "admin_qpairs": 0, 00:29:34.176 "io_qpairs": 1, 00:29:34.176 "current_admin_qpairs": 0, 00:29:34.176 "current_io_qpairs": 1, 00:29:34.176 "pending_bdev_io": 0, 00:29:34.176 "completed_nvme_io": 20473, 00:29:34.176 "transports": [ 00:29:34.176 { 00:29:34.176 "trtype": "TCP" 00:29:34.176 } 00:29:34.176 ] 00:29:34.176 } 00:29:34.176 ] 00:29:34.176 }' 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:34.176 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3151584 00:29:42.310 Initializing NVMe Controllers 00:29:42.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:42.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:42.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:42.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:42.310 
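The spdk_nvme_perf job attaching above was pointed at a target that adq_configure_nvmf_target 0 assembled through the RPC calls traced just before it. Reduced to the essentials, using rpc_cmd (the autotest wrapper; outside the harness the same calls go through scripts/rpc.py against the nvmf_tgt started with --wait-for-rpc) and with paths shortened but argument values copied from this run:

    # socket options must be set before framework_start_init; placement-id 0 = no ADQ steering
    rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM-backed namespace, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # baseline load: 4 initiator cores (-c 0xF0), queue depth 64, 4 KiB random reads for 10 s
    spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'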
Initialization complete. Launching workers. 00:29:42.310 ======================================================== 00:29:42.310 Latency(us) 00:29:42.310 Device Information : IOPS MiB/s Average min max 00:29:42.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11264.32 44.00 5682.77 1343.95 9050.22 00:29:42.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14824.39 57.91 4317.59 1309.21 8926.96 00:29:42.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14346.49 56.04 4460.81 1264.88 10667.55 00:29:42.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13840.30 54.06 4624.60 1221.57 10376.93 00:29:42.310 ======================================================== 00:29:42.310 Total : 54275.50 212.01 4717.06 1221.57 10667.55 00:29:42.310 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.310 rmmod nvme_tcp 00:29:42.310 rmmod nvme_fabrics 00:29:42.310 rmmod nvme_keyring 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3151293 ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3151293 ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151293' 00:29:42.310 killing process with pid 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3151293 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:42.310 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:29:42.311 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.311 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.311 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.311 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.311 17:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.222 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.222 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:44.222 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:44.222 17:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:46.134 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:48.046 17:29:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.338 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.338 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:53.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:53.339 17:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:53.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:29:53.339 00:29:53.339 --- 10.0.0.2 ping statistics --- 00:29:53.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.339 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:29:53.339 00:29:53.339 --- 10.0.0.1 ping statistics --- 00:29:53.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.339 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:53.339 net.core.busy_poll = 1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:29:53.339 net.core.busy_read = 1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:53.339 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3156038 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3156038 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3156038 ']' 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.599 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.600 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.600 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.600 17:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.600 [2024-10-01 17:29:51.959378] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:29:53.600 [2024-10-01 17:29:51.959465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.600 [2024-10-01 17:29:52.033451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.600 [2024-10-01 17:29:52.072645] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
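Before this second target instance finishes starting, adq_configure_driver (traced just above, after the ice driver and sch_mqprio were reloaded) has put the target-side port into ADQ mode: hardware TC offload on, busy polling enabled, an mqprio qdisc splitting the queues into two traffic classes, and a hardware flower filter steering NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. Condensed with this run's values and shortened paths (set_xps_rxqs is the helper shipped in the SPDK tree under scripts/perf/nvmf/):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to the NIC
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst 10.0.0.2:4420) into TC1 entirely in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # pin XPS/RX queues to cores

On the target side, the only RPC-level differences from the baseline pass are --enable-placement-id 1 on the posix sock implementation and --sock-priority 1 on the TCP transport, as the trace below shows.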
00:29:53.600 [2024-10-01 17:29:52.072690] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.600 [2024-10-01 17:29:52.072698] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.600 [2024-10-01 17:29:52.072705] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.600 [2024-10-01 17:29:52.072711] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.600 [2024-10-01 17:29:52.072860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.600 [2024-10-01 17:29:52.072979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.600 [2024-10-01 17:29:52.073140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.600 [2024-10-01 17:29:52.073140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 [2024-10-01 17:29:52.940814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 Malloc1 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.544 17:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.544 [2024-10-01 17:29:53.000101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.544 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.544 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3156395 00:29:54.544 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:54.545 17:29:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:57.086 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:57.086 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.086 17:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:57.086 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.086 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:57.086 "tick_rate": 2400000000, 00:29:57.086 "poll_groups": [ 00:29:57.086 { 00:29:57.086 "name": "nvmf_tgt_poll_group_000", 00:29:57.086 "admin_qpairs": 1, 00:29:57.086 "io_qpairs": 1, 00:29:57.086 "current_admin_qpairs": 1, 00:29:57.086 "current_io_qpairs": 1, 00:29:57.086 "pending_bdev_io": 0, 00:29:57.086 "completed_nvme_io": 28061, 00:29:57.086 "transports": [ 00:29:57.086 { 00:29:57.086 "trtype": "TCP" 00:29:57.086 } 00:29:57.086 ] 00:29:57.086 }, 00:29:57.086 { 00:29:57.086 "name": "nvmf_tgt_poll_group_001", 00:29:57.086 "admin_qpairs": 0, 00:29:57.086 "io_qpairs": 3, 00:29:57.086 "current_admin_qpairs": 0, 00:29:57.086 "current_io_qpairs": 3, 00:29:57.086 "pending_bdev_io": 0, 00:29:57.086 "completed_nvme_io": 41958, 00:29:57.086 "transports": [ 00:29:57.086 { 00:29:57.086 "trtype": "TCP" 00:29:57.086 } 00:29:57.086 ] 00:29:57.086 }, 00:29:57.086 { 00:29:57.086 "name": "nvmf_tgt_poll_group_002", 00:29:57.086 "admin_qpairs": 0, 00:29:57.086 "io_qpairs": 0, 00:29:57.086 "current_admin_qpairs": 0, 00:29:57.086 "current_io_qpairs": 0, 00:29:57.086 "pending_bdev_io": 0, 00:29:57.086 "completed_nvme_io": 0, 00:29:57.086 "transports": [ 00:29:57.086 { 00:29:57.086 "trtype": "TCP" 00:29:57.086 } 00:29:57.086 ] 00:29:57.086 }, 00:29:57.086 { 00:29:57.087 "name": "nvmf_tgt_poll_group_003", 00:29:57.087 "admin_qpairs": 0, 00:29:57.087 "io_qpairs": 0, 00:29:57.087 "current_admin_qpairs": 0, 00:29:57.087 "current_io_qpairs": 0, 00:29:57.087 "pending_bdev_io": 0, 00:29:57.087 "completed_nvme_io": 0, 00:29:57.087 "transports": [ 00:29:57.087 { 00:29:57.087 "trtype": "TCP" 00:29:57.087 } 00:29:57.087 ] 00:29:57.087 } 00:29:57.087 ] 00:29:57.087 }' 00:29:57.087 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:57.087 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:57.087 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:57.087 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:57.087 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3156395 00:30:05.217 Initializing NVMe Controllers 00:30:05.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:05.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:05.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:05.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:05.217 Initialization complete. Launching workers. 
00:30:05.217 ======================================================== 00:30:05.217 Latency(us) 00:30:05.217 Device Information : IOPS MiB/s Average min max 00:30:05.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 19068.19 74.49 3356.27 1111.25 45265.89 00:30:05.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6848.30 26.75 9376.20 1396.24 54537.03 00:30:05.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7957.60 31.08 8042.77 1403.43 53167.20 00:30:05.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6855.10 26.78 9339.29 1374.33 55656.58 00:30:05.217 ======================================================== 00:30:05.217 Total : 40729.19 159.10 6291.11 1111.25 55656.58 00:30:05.217 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.217 rmmod nvme_tcp 00:30:05.217 rmmod nvme_fabrics 00:30:05.217 rmmod nvme_keyring 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.217 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3156038 ']' 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3156038 ']' 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3156038' 00:30:05.218 killing process with pid 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3156038 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:05.218 
17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.218 17:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:30:08.520 00:30:08.520 real 0m53.670s 00:30:08.520 user 2m50.129s 00:30:08.520 sys 0m11.178s 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.520 ************************************ 00:30:08.520 END TEST nvmf_perf_adq 00:30:08.520 ************************************ 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:08.520 ************************************ 00:30:08.520 START TEST nvmf_shutdown 00:30:08.520 ************************************ 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:08.520 * Looking for test storage... 
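The nvmf_perf_adq run that finishes above reduces to a short target-side RPC sequence plus one jq check against nvmf_get_stats. Below is a consolidated sketch of that sequence written against scripts/rpc.py instead of the harness's rpc_cmd wrapper; the rpc.py path is an assumption, while the flags, bdev size, NQN and 10.0.0.2 listener address are the ones visible in this run.

```bash
# Sketch only: the RPC sequence perf_adq.sh drives through rpc_cmd in the trace above.
rpc=./scripts/rpc.py   # assumed path to the SPDK RPC client

impl=$($rpc sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
$rpc sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf is running, count the poll groups that never picked up an I/O qpair;
# the run above checks that at least two of the four groups stayed idle.
idle=$($rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
echo "idle poll groups: $idle"
```

On the initiator side the trace launches build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 against the same subsystem; with placement-id steering enabled, I/O qpairs should land only on the poll groups that own the busy cores, which is what the jq filter counts.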
00:30:08.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.520 --rc genhtml_branch_coverage=1 00:30:08.520 --rc genhtml_function_coverage=1 00:30:08.520 --rc genhtml_legend=1 00:30:08.520 --rc geninfo_all_blocks=1 00:30:08.520 --rc geninfo_unexecuted_blocks=1 00:30:08.520 00:30:08.520 ' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.520 --rc genhtml_branch_coverage=1 00:30:08.520 --rc genhtml_function_coverage=1 00:30:08.520 --rc genhtml_legend=1 00:30:08.520 --rc geninfo_all_blocks=1 00:30:08.520 --rc geninfo_unexecuted_blocks=1 00:30:08.520 00:30:08.520 ' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.520 --rc genhtml_branch_coverage=1 00:30:08.520 --rc genhtml_function_coverage=1 00:30:08.520 --rc genhtml_legend=1 00:30:08.520 --rc geninfo_all_blocks=1 00:30:08.520 --rc geninfo_unexecuted_blocks=1 00:30:08.520 00:30:08.520 ' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.520 --rc genhtml_branch_coverage=1 00:30:08.520 --rc genhtml_function_coverage=1 00:30:08.520 --rc genhtml_legend=1 00:30:08.520 --rc geninfo_all_blocks=1 00:30:08.520 --rc geninfo_unexecuted_blocks=1 00:30:08.520 00:30:08.520 ' 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
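Scattered through the xtrace just above is scripts/common.sh's lcov version gate ("lt 1.15 2" via cmp_versions), which decides whether the installed lcov is older than 2 before choosing the coverage options. A minimal stand-alone sketch of that comparison is below; version_lt is a hypothetical name, and unlike the real cmp_versions this treats non-numeric fields simply as 0.

```bash
# Minimal sketch of the cmp_versions pattern traced above: split both versions on
# '.', '-' and ':' and compare numerically field by field.
version_lt() {    # returns success (0) when $1 is strictly older than $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < max; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0    # the real helper also understands hex components
        [[ $y =~ ^[0-9]+$ ]] || y=0
        ((x > y)) && return 1
        ((x < y)) && return 0
    done
    return 1    # versions are equal
}

# Same shape as the check in the trace: take the last field of `lcov --version`.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov is older than 2"
fi
```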
00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.520 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:08.521 17:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:08.521 ************************************ 00:30:08.521 START TEST nvmf_shutdown_tc1 00:30:08.521 ************************************ 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.521 17:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.666 17:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.666 17:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.666 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.666 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:16.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.667 17:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:16.667 17:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:30:16.667 00:30:16.667 --- 10.0.0.2 ping statistics --- 00:30:16.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.667 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:30:16.667 00:30:16.667 --- 10.0.0.1 ping statistics --- 00:30:16.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.667 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3163308 00:30:16.667 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3163308 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3163308 ']' 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
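The nvmf_tcp_init sequence traced above is what gives the test a self-contained TCP path on one host: the target-facing port of the ice-driven NIC pair (cvl_0_0) is moved into a private network namespace while the initiator-facing port (cvl_0_1) stays in the root namespace, and both directions are verified with a ping before nvmf_tgt is started inside the namespace. The same steps collected into one sketch; interface names and addresses are the ones from this run, and the nvmf_tgt path is shortened for readability.

```bash
# Sketch of the nvmf_tcp_init steps traced above (run as root).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target side lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in; the SPDK_NVMF comment lets nvmftestfini strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator

# Every nvmf_tgt for this test is then launched inside the namespace, e.g.:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
```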
00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.668 17:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 [2024-10-01 17:30:14.402764] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:30:16.668 [2024-10-01 17:30:14.402829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.668 [2024-10-01 17:30:14.493823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.668 [2024-10-01 17:30:14.541975] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.668 [2024-10-01 17:30:14.542045] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.668 [2024-10-01 17:30:14.542054] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.668 [2024-10-01 17:30:14.542060] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.668 [2024-10-01 17:30:14.542067] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.668 [2024-10-01 17:30:14.542195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.668 [2024-10-01 17:30:14.542360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.668 [2024-10-01 17:30:14.542523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.668 [2024-10-01 17:30:14.542523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 [2024-10-01 17:30:15.256074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:16.929 17:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.929 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.929 Malloc1 
00:30:16.929 [2024-10-01 17:30:15.351422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.929 Malloc2 00:30:16.929 Malloc3 00:30:16.929 Malloc4 00:30:17.190 Malloc5 00:30:17.190 Malloc6 00:30:17.190 Malloc7 00:30:17.190 Malloc8 00:30:17.190 Malloc9 00:30:17.190 Malloc10 00:30:17.190 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.190 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:17.190 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.190 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3163520 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3163520 /var/tmp/bdevperf.sock 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3163520 ']' 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
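Immediately below, the trace shows gen_nvmf_target_json from nvmf/common.sh assembling the JSON that bdev_svc receives through --json /dev/fd/63 (process substitution): one heredoc fragment per target subsystem, each becoming a bdev_nvme_attach_controller entry, joined with commas via IFS and pretty-printed with jq. A reduced sketch of that pattern for two subsystems follows; wrapping the entries in a bare JSON array is a simplification, since the real helper nests them inside the complete bdev-subsystem configuration.

```bash
# Reduced illustration of the gen_nvmf_target_json loop traced below.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with ',' and pretty-print the result.
printf '[%s]\n' "$(IFS=,; printf '%s' "${config[*]}")" | jq .
```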
00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.452 { 00:30:17.452 "params": { 00:30:17.452 "name": "Nvme$subsystem", 00:30:17.452 "trtype": "$TEST_TRANSPORT", 00:30:17.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.452 "adrfam": "ipv4", 00:30:17.452 "trsvcid": "$NVMF_PORT", 00:30:17.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.452 "hdgst": ${hdgst:-false}, 00:30:17.452 "ddgst": ${ddgst:-false} 00:30:17.452 }, 00:30:17.452 "method": "bdev_nvme_attach_controller" 00:30:17.452 } 00:30:17.452 EOF 00:30:17.452 )") 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.452 { 00:30:17.452 "params": { 00:30:17.452 "name": "Nvme$subsystem", 00:30:17.452 "trtype": "$TEST_TRANSPORT", 00:30:17.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.452 "adrfam": "ipv4", 00:30:17.452 "trsvcid": "$NVMF_PORT", 00:30:17.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.452 "hdgst": ${hdgst:-false}, 00:30:17.452 "ddgst": ${ddgst:-false} 00:30:17.452 }, 00:30:17.452 "method": "bdev_nvme_attach_controller" 00:30:17.452 } 00:30:17.452 EOF 00:30:17.452 )") 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.452 { 00:30:17.452 "params": { 00:30:17.452 "name": "Nvme$subsystem", 00:30:17.452 "trtype": "$TEST_TRANSPORT", 00:30:17.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.452 "adrfam": "ipv4", 00:30:17.452 "trsvcid": "$NVMF_PORT", 00:30:17.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.452 "hdgst": ${hdgst:-false}, 00:30:17.452 "ddgst": ${ddgst:-false} 00:30:17.452 }, 00:30:17.452 "method": "bdev_nvme_attach_controller" 
00:30:17.452 } 00:30:17.452 EOF 00:30:17.452 )") 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.452 { 00:30:17.452 "params": { 00:30:17.452 "name": "Nvme$subsystem", 00:30:17.452 "trtype": "$TEST_TRANSPORT", 00:30:17.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.452 "adrfam": "ipv4", 00:30:17.452 "trsvcid": "$NVMF_PORT", 00:30:17.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.452 "hdgst": ${hdgst:-false}, 00:30:17.452 "ddgst": ${ddgst:-false} 00:30:17.452 }, 00:30:17.452 "method": "bdev_nvme_attach_controller" 00:30:17.452 } 00:30:17.452 EOF 00:30:17.452 )") 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.452 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.452 { 00:30:17.452 "params": { 00:30:17.452 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.453 { 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 [2024-10-01 17:30:15.797104] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:30:17.453 [2024-10-01 17:30:15.797159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.453 { 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.453 { 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.453 { 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.453 { 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme$subsystem", 00:30:17.453 "trtype": "$TEST_TRANSPORT", 00:30:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.453 "adrfam": "ipv4", 
00:30:17.453 "trsvcid": "$NVMF_PORT", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.453 "hdgst": ${hdgst:-false}, 00:30:17.453 "ddgst": ${ddgst:-false} 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 } 00:30:17.453 EOF 00:30:17.453 )") 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:17.453 17:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme1", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme2", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme3", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme4", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme5", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme6", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme7", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 
"adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme8", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.453 },{ 00:30:17.453 "params": { 00:30:17.453 "name": "Nvme9", 00:30:17.453 "trtype": "tcp", 00:30:17.453 "traddr": "10.0.0.2", 00:30:17.453 "adrfam": "ipv4", 00:30:17.453 "trsvcid": "4420", 00:30:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:17.453 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:17.453 "hdgst": false, 00:30:17.453 "ddgst": false 00:30:17.453 }, 00:30:17.453 "method": "bdev_nvme_attach_controller" 00:30:17.454 },{ 00:30:17.454 "params": { 00:30:17.454 "name": "Nvme10", 00:30:17.454 "trtype": "tcp", 00:30:17.454 "traddr": "10.0.0.2", 00:30:17.454 "adrfam": "ipv4", 00:30:17.454 "trsvcid": "4420", 00:30:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:17.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:17.454 "hdgst": false, 00:30:17.454 "ddgst": false 00:30:17.454 }, 00:30:17.454 "method": "bdev_nvme_attach_controller" 00:30:17.454 }' 00:30:17.454 [2024-10-01 17:30:15.859345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.454 [2024-10-01 17:30:15.890741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.367 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.367 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3163520 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:19.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3163520 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:19.368 17:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3163308 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.310 "name": "Nvme$subsystem", 00:30:20.310 "trtype": "$TEST_TRANSPORT", 00:30:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.310 "adrfam": "ipv4", 00:30:20.310 "trsvcid": "$NVMF_PORT", 00:30:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.310 "hdgst": ${hdgst:-false}, 00:30:20.310 "ddgst": ${ddgst:-false} 00:30:20.310 }, 00:30:20.310 "method": "bdev_nvme_attach_controller" 00:30:20.310 } 00:30:20.310 EOF 00:30:20.310 )") 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.310 "name": "Nvme$subsystem", 00:30:20.310 "trtype": "$TEST_TRANSPORT", 00:30:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.310 "adrfam": "ipv4", 00:30:20.310 "trsvcid": "$NVMF_PORT", 00:30:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.310 "hdgst": ${hdgst:-false}, 00:30:20.310 "ddgst": ${ddgst:-false} 00:30:20.310 }, 00:30:20.310 "method": "bdev_nvme_attach_controller" 00:30:20.310 } 00:30:20.310 EOF 00:30:20.310 )") 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.310 "name": "Nvme$subsystem", 00:30:20.310 "trtype": "$TEST_TRANSPORT", 00:30:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.310 "adrfam": "ipv4", 00:30:20.310 "trsvcid": "$NVMF_PORT", 00:30:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.310 "hdgst": ${hdgst:-false}, 00:30:20.310 "ddgst": ${ddgst:-false} 00:30:20.310 }, 00:30:20.310 "method": "bdev_nvme_attach_controller" 00:30:20.310 } 00:30:20.310 EOF 00:30:20.310 )") 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.310 "name": "Nvme$subsystem", 00:30:20.310 "trtype": "$TEST_TRANSPORT", 00:30:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.310 "adrfam": "ipv4", 00:30:20.310 "trsvcid": "$NVMF_PORT", 00:30:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.310 "hdgst": ${hdgst:-false}, 00:30:20.310 "ddgst": ${ddgst:-false} 00:30:20.310 }, 00:30:20.310 "method": "bdev_nvme_attach_controller" 00:30:20.310 } 00:30:20.310 EOF 00:30:20.310 )") 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.310 "name": "Nvme$subsystem", 00:30:20.310 "trtype": "$TEST_TRANSPORT", 00:30:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.310 "adrfam": "ipv4", 00:30:20.310 "trsvcid": "$NVMF_PORT", 00:30:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.310 "hdgst": ${hdgst:-false}, 00:30:20.310 "ddgst": ${ddgst:-false} 00:30:20.310 }, 00:30:20.310 "method": "bdev_nvme_attach_controller" 00:30:20.310 } 00:30:20.310 EOF 00:30:20.310 )") 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.310 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.310 { 00:30:20.310 "params": { 00:30:20.311 "name": "Nvme$subsystem", 00:30:20.311 "trtype": "$TEST_TRANSPORT", 00:30:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "$NVMF_PORT", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.311 "hdgst": ${hdgst:-false}, 00:30:20.311 "ddgst": ${ddgst:-false} 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 } 00:30:20.311 EOF 00:30:20.311 )") 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.311 [2024-10-01 17:30:18.689687] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:30:20.311 [2024-10-01 17:30:18.689744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164179 ] 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.311 { 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme$subsystem", 00:30:20.311 "trtype": "$TEST_TRANSPORT", 00:30:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "$NVMF_PORT", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.311 "hdgst": ${hdgst:-false}, 00:30:20.311 "ddgst": ${ddgst:-false} 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 } 00:30:20.311 EOF 00:30:20.311 )") 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.311 { 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme$subsystem", 00:30:20.311 "trtype": "$TEST_TRANSPORT", 00:30:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "$NVMF_PORT", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.311 "hdgst": ${hdgst:-false}, 00:30:20.311 "ddgst": ${ddgst:-false} 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 } 00:30:20.311 EOF 00:30:20.311 )") 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.311 { 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme$subsystem", 00:30:20.311 "trtype": "$TEST_TRANSPORT", 00:30:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "$NVMF_PORT", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.311 "hdgst": ${hdgst:-false}, 00:30:20.311 "ddgst": ${ddgst:-false} 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 } 00:30:20.311 EOF 00:30:20.311 )") 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:20.311 { 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme$subsystem", 00:30:20.311 "trtype": "$TEST_TRANSPORT", 00:30:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.311 
"adrfam": "ipv4", 00:30:20.311 "trsvcid": "$NVMF_PORT", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.311 "hdgst": ${hdgst:-false}, 00:30:20.311 "ddgst": ${ddgst:-false} 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 } 00:30:20.311 EOF 00:30:20.311 )") 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:20.311 17:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme1", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme2", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme3", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme4", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme5", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme6", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme7", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 
00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme8", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme9", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 },{ 00:30:20.311 "params": { 00:30:20.311 "name": "Nvme10", 00:30:20.311 "trtype": "tcp", 00:30:20.311 "traddr": "10.0.0.2", 00:30:20.311 "adrfam": "ipv4", 00:30:20.311 "trsvcid": "4420", 00:30:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:20.311 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:20.311 "hdgst": false, 00:30:20.311 "ddgst": false 00:30:20.311 }, 00:30:20.311 "method": "bdev_nvme_attach_controller" 00:30:20.311 }' 00:30:20.311 [2024-10-01 17:30:18.752222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.311 [2024-10-01 17:30:18.783112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.700 Running I/O for 1 seconds... 
00:30:22.745 1856.00 IOPS, 116.00 MiB/s 00:30:22.745 Latency(us) 00:30:22.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.745 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.745 Verification LBA range: start 0x0 length 0x400 00:30:22.745 Nvme1n1 : 1.14 224.52 14.03 0.00 0.00 282153.60 20971.52 242920.11 00:30:22.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.745 Verification LBA range: start 0x0 length 0x400 00:30:22.745 Nvme2n1 : 1.15 222.18 13.89 0.00 0.00 280420.27 16820.91 260396.37 00:30:22.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.745 Verification LBA range: start 0x0 length 0x400 00:30:22.745 Nvme3n1 : 1.18 271.88 16.99 0.00 0.00 225413.63 7973.55 260396.37 00:30:22.746 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme4n1 : 1.10 232.42 14.53 0.00 0.00 258268.80 19879.25 244667.73 00:30:22.746 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme5n1 : 1.11 230.57 14.41 0.00 0.00 255697.28 17039.36 248162.99 00:30:22.746 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme6n1 : 1.15 223.51 13.97 0.00 0.00 259730.35 17367.04 267386.88 00:30:22.746 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme7n1 : 1.19 274.19 17.14 0.00 0.00 208039.58 3140.27 260396.37 00:30:22.746 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme8n1 : 1.19 268.96 16.81 0.00 0.00 208997.97 11304.96 225443.84 00:30:22.746 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme9n1 : 1.20 266.39 16.65 0.00 0.00 207427.18 10704.21 263891.63 00:30:22.746 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.746 Verification LBA range: start 0x0 length 0x400 00:30:22.746 Nvme10n1 : 1.18 216.18 13.51 0.00 0.00 250685.23 21736.11 270882.13 00:30:22.746 =================================================================================================================== 00:30:22.746 Total : 2430.80 151.92 0.00 0.00 240799.45 3140.27 270882.13 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:23.041 17:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.041 rmmod nvme_tcp 00:30:23.041 rmmod nvme_fabrics 00:30:23.041 rmmod nvme_keyring 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3163308 ']' 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3163308 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3163308 ']' 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3163308 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3163308 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3163308' 00:30:23.041 killing process with pid 3163308 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3163308 00:30:23.041 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3163308 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:23.302 17:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.302 17:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.849 00:30:25.849 real 0m16.856s 00:30:25.849 user 0m35.056s 00:30:25.849 sys 0m6.633s 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:25.849 ************************************ 00:30:25.849 END TEST nvmf_shutdown_tc1 00:30:25.849 ************************************ 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:25.849 ************************************ 00:30:25.849 START TEST nvmf_shutdown_tc2 00:30:25.849 ************************************ 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
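A note on the firewall handling visible in the teardown just traced: rules inserted by the test framework are tagged with an 'SPDK_NVMF:' comment, so nvmftestfini can rewrite the ruleset minus exactly those rules (iptables-save | grep -v SPDK_NVMF | iptables-restore) without disturbing anything else on the host; the matching tagged insert appears again below when nvmftestinit opens port 4420 on the initiator interface. A small sketch of the pattern, with hypothetical helper names (the trace shows the real ipts/iptr helpers in nvmf/common.sh):

spdk_ipts_sketch() {
    # insert an ACCEPT rule for NVMe/TCP traffic and tag it so it can be
    # identified later; "$1" is the initiator-side interface (cvl_0_1 in the trace)
    iptables -I INPUT 1 -i "$1" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $1 -p tcp --dport 4420 -j ACCEPT"
}

spdk_iptr_sketch() {
    # drop every SPDK_NVMF-tagged rule by filtering the saved ruleset
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}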
00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:25.849 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.849 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:25.850 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.850 17:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:25.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:25.850 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.850 17:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:25.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:30:25.850 00:30:25.850 --- 10.0.0.2 ping statistics --- 00:30:25.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.850 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:30:25.850 00:30:25.850 --- 10.0.0.1 ping statistics --- 00:30:25.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.850 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.850 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3165309 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3165309 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3165309 ']' 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.851 17:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.851 [2024-10-01 17:30:24.313557] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:30:25.851 [2024-10-01 17:30:24.313607] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.851 [2024-10-01 17:30:24.389543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.111 [2024-10-01 17:30:24.419076] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.111 [2024-10-01 17:30:24.419106] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.111 [2024-10-01 17:30:24.419111] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.111 [2024-10-01 17:30:24.419120] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.111 [2024-10-01 17:30:24.419124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.111 [2024-10-01 17:30:24.419266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.111 [2024-10-01 17:30:24.419425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.111 [2024-10-01 17:30:24.419581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.111 [2024-10-01 17:30:24.419583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.682 [2024-10-01 17:30:25.146268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.682 17:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:26.682 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.682 
17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.682 Malloc1 00:30:26.942 [2024-10-01 17:30:25.244920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.942 Malloc2 00:30:26.942 Malloc3 00:30:26.942 Malloc4 00:30:26.942 Malloc5 00:30:26.942 Malloc6 00:30:26.942 Malloc7 00:30:27.203 Malloc8 00:30:27.203 Malloc9 00:30:27.203 Malloc10 00:30:27.203 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.203 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:27.203 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.203 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3165690 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3165690 /var/tmp/bdevperf.sock 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3165690 ']' 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
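The wait above polls until bdevperf has created its RPC socket and is listening on it; every later rpc_cmd -s /var/tmp/bdevperf.sock call in this run goes through that socket. A minimal, hypothetical stand-in for that step (the function name wait_for_rpc_sock is invented here; this is not the real waitforlisten helper from autotest_common.sh):
# Hypothetical sketch of the wait step traced above, assuming bash.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [ -S "$sock" ] && return 0   # the UNIX socket node exists, so the app is up
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
# Example: wait_for_rpc_sock /var/tmp/bdevperf.sock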
00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": 
"bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 [2024-10-01 17:30:25.691280] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:30:27.204 [2024-10-01 17:30:25.691334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165690 ] 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:27.204 { 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme$subsystem", 00:30:27.204 "trtype": "$TEST_TRANSPORT", 00:30:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.204 
"adrfam": "ipv4", 00:30:27.204 "trsvcid": "$NVMF_PORT", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.204 "hdgst": ${hdgst:-false}, 00:30:27.204 "ddgst": ${ddgst:-false} 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 } 00:30:27.204 EOF 00:30:27.204 )") 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:30:27.204 17:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme1", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme2", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme3", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme4", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme5", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme6", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme7", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 
00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme8", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme9", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 },{ 00:30:27.204 "params": { 00:30:27.204 "name": "Nvme10", 00:30:27.204 "trtype": "tcp", 00:30:27.204 "traddr": "10.0.0.2", 00:30:27.204 "adrfam": "ipv4", 00:30:27.204 "trsvcid": "4420", 00:30:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:27.204 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:27.204 "hdgst": false, 00:30:27.204 "ddgst": false 00:30:27.204 }, 00:30:27.204 "method": "bdev_nvme_attach_controller" 00:30:27.204 }' 00:30:27.464 [2024-10-01 17:30:25.753505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.464 [2024-10-01 17:30:25.784833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.846 Running I/O for 10 seconds... 
00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:29.106 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.366 17:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:29.366 17:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.626 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3165690 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3165690 ']' 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3165690 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:29.886 1865.00 IOPS, 116.56 MiB/s 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165690 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165690' 00:30:29.886 killing process with pid 3165690 00:30:29.886 17:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3165690
00:30:29.886 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3165690
00:30:29.886 Received shutdown signal, test time was about 1.141372 seconds
00:30:29.886
00:30:29.886 Latency(us)
00:30:29.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:29.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme1n1 : 1.13 226.82 14.18 0.00 0.00 279369.81 16384.00 246415.36
00:30:29.886 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme2n1 : 1.14 225.13 14.07 0.00 0.00 276612.27 15291.73 274377.39
00:30:29.886 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme3n1 : 1.12 228.56 14.29 0.00 0.00 267760.53 12397.23 265639.25
00:30:29.886 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme4n1 : 1.13 282.57 17.66 0.00 0.00 212753.49 12014.93 260396.37
00:30:29.886 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme5n1 : 1.14 225.54 14.10 0.00 0.00 261965.65 20971.52 255153.49
00:30:29.886 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme6n1 : 1.13 227.28 14.21 0.00 0.00 255079.25 18022.40 249910.61
00:30:29.886 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme7n1 : 1.14 280.58 17.54 0.00 0.00 203083.86 13161.81 225443.84
00:30:29.886 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme8n1 : 1.10 231.78 14.49 0.00 0.00 240238.51 18459.31 262144.00
00:30:29.886 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme9n1 : 1.11 240.44 15.03 0.00 0.00 224887.31 7045.12 239424.85
00:30:29.886 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:29.886 Verification LBA range: start 0x0 length 0x400
00:30:29.886 Nvme10n1 : 1.11 233.86 14.62 0.00 0.00 228460.18 4068.69 228939.09
00:30:29.886 ===================================================================================================================
00:30:29.886 Total : 2402.56 150.16 0.00 0.00 243150.91 4068.69 274377.39
00:30:30.147 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3165309
00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.086 rmmod nvme_tcp 00:30:31.086 rmmod nvme_fabrics 00:30:31.086 rmmod nvme_keyring 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3165309 ']' 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3165309 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3165309 ']' 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3165309 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.086 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165309 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165309' 00:30:31.346 killing process with pid 3165309 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3165309 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3165309 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
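The teardown traced above follows one pattern for both pids: check the process is still alive with kill -0, look up its command name with ps --no-headers -o comm= (reactor_0 and reactor_1 in this run), then kill it and wait for it to be reaped. A compact, hypothetical stand-in (stop_spdk_app is an invented name, not the real killprocess from autotest_common.sh):
# Hypothetical helper mirroring the kill/wait sequence traced above.
stop_spdk_app() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is a child of this shell
}
# Example: stop_spdk_app "$nvmfpid"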
00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.346 17:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.886 00:30:33.886 real 0m8.063s 00:30:33.886 user 0m24.815s 00:30:33.886 sys 0m1.276s 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.886 ************************************ 00:30:33.886 END TEST nvmf_shutdown_tc2 00:30:33.886 ************************************ 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:33.886 17:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:33.886 ************************************ 00:30:33.886 START TEST nvmf_shutdown_tc3 00:30:33.886 ************************************ 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:33.886 17:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.886 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.887 17:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:33.887 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:33.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
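Each PCI function that matches one of those device-ID lists is then resolved to its kernel net device through sysfs, which is where the 'Found net devices under 0000:4b:00.0: cvl_0_0' lines just below come from. A minimal equivalent of that lookup, with the PCI address taken from this run:
# Minimal sysfs lookup equivalent to the pci_net_devs glob traced below.
pci=0000:4b:00.0
for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue        # no net device bound to this PCI function
    echo "Found net devices under $pci: ${netdir##*/}"
done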
00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:33.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:33.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:33.887 17:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.887 17:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:30:33.887 00:30:33.887 --- 10.0.0.2 ping statistics --- 00:30:33.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.887 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:33.887 00:30:33.887 --- 10.0.0.1 ping statistics --- 00:30:33.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.887 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:33.887 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3167057 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3167057 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 
3167057 ']' 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:33.888 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.148 [2024-10-01 17:30:32.456289] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:30:34.148 [2024-10-01 17:30:32.456346] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.148 [2024-10-01 17:30:32.533013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.148 [2024-10-01 17:30:32.563144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.148 [2024-10-01 17:30:32.563176] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.148 [2024-10-01 17:30:32.563181] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.148 [2024-10-01 17:30:32.563190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.148 [2024-10-01 17:30:32.563194] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
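Condensed, the nvmf_tcp_init sequence traced a little earlier moves the target-side NIC into its own network namespace, addresses both ends, lets NVMe/TCP traffic on port 4420 through the firewall, and pings in both directions before nvmf_tgt is started inside that namespace. Replayed as a short script, with the interface names and addresses taken from this run:
# Condensed replay of the tcp-init steps traced above; adjust the names for other hosts.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1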
00:30:34.148 [2024-10-01 17:30:32.563302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.148 [2024-10-01 17:30:32.563464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.148 [2024-10-01 17:30:32.563621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.148 [2024-10-01 17:30:32.563624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:34.148 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.148 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:34.148 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:34.148 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.148 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.408 [2024-10-01 17:30:32.704609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:34.408 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:34.409 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:34.409 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.409 17:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.409 Malloc1 00:30:34.409 [2024-10-01 17:30:32.803243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.409 Malloc2 00:30:34.409 Malloc3 00:30:34.409 Malloc4 00:30:34.409 Malloc5 00:30:34.669 Malloc6 00:30:34.669 Malloc7 00:30:34.669 Malloc8 00:30:34.669 Malloc9 00:30:34.669 Malloc10 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3167211 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3167211 /var/tmp/bdevperf.sock 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3167211 ']' 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:34.669 17:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.669 { 00:30:34.669 "params": { 00:30:34.669 "name": "Nvme$subsystem", 00:30:34.669 "trtype": "$TEST_TRANSPORT", 00:30:34.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.669 "adrfam": "ipv4", 00:30:34.669 "trsvcid": "$NVMF_PORT", 00:30:34.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.669 "hdgst": ${hdgst:-false}, 00:30:34.669 "ddgst": ${ddgst:-false} 00:30:34.669 }, 00:30:34.669 "method": "bdev_nvme_attach_controller" 00:30:34.669 } 00:30:34.669 EOF 00:30:34.669 )") 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.669 { 00:30:34.669 "params": { 00:30:34.669 "name": "Nvme$subsystem", 00:30:34.669 "trtype": "$TEST_TRANSPORT", 00:30:34.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.669 "adrfam": "ipv4", 00:30:34.669 "trsvcid": "$NVMF_PORT", 00:30:34.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.669 "hdgst": ${hdgst:-false}, 00:30:34.669 "ddgst": ${ddgst:-false} 00:30:34.669 }, 00:30:34.669 "method": "bdev_nvme_attach_controller" 00:30:34.669 } 00:30:34.669 EOF 00:30:34.669 )") 00:30:34.669 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 
"name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 "name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 "name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 "name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 [2024-10-01 17:30:33.248261] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:30:34.930 [2024-10-01 17:30:33.248311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167211 ] 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 "name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.930 { 00:30:34.930 "params": { 00:30:34.930 "name": "Nvme$subsystem", 00:30:34.930 "trtype": "$TEST_TRANSPORT", 00:30:34.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.930 "adrfam": "ipv4", 00:30:34.930 "trsvcid": "$NVMF_PORT", 00:30:34.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.930 "hdgst": ${hdgst:-false}, 00:30:34.930 "ddgst": ${ddgst:-false} 00:30:34.930 }, 00:30:34.930 "method": "bdev_nvme_attach_controller" 00:30:34.930 } 00:30:34.930 EOF 00:30:34.930 )") 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.930 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.931 { 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme$subsystem", 00:30:34.931 "trtype": "$TEST_TRANSPORT", 00:30:34.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "$NVMF_PORT", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.931 "hdgst": ${hdgst:-false}, 00:30:34.931 "ddgst": ${ddgst:-false} 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 } 00:30:34.931 EOF 00:30:34.931 )") 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:34.931 { 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme$subsystem", 00:30:34.931 "trtype": "$TEST_TRANSPORT", 00:30:34.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.931 
"adrfam": "ipv4", 00:30:34.931 "trsvcid": "$NVMF_PORT", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.931 "hdgst": ${hdgst:-false}, 00:30:34.931 "ddgst": ${ddgst:-false} 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 } 00:30:34.931 EOF 00:30:34.931 )") 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:30:34.931 17:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme1", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme2", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme3", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme4", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme5", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme6", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme7", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 
00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme8", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme9", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 },{ 00:30:34.931 "params": { 00:30:34.931 "name": "Nvme10", 00:30:34.931 "trtype": "tcp", 00:30:34.931 "traddr": "10.0.0.2", 00:30:34.931 "adrfam": "ipv4", 00:30:34.931 "trsvcid": "4420", 00:30:34.931 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:34.931 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:34.931 "hdgst": false, 00:30:34.931 "ddgst": false 00:30:34.931 }, 00:30:34.931 "method": "bdev_nvme_attach_controller" 00:30:34.931 }' 00:30:34.931 [2024-10-01 17:30:33.310331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.931 [2024-10-01 17:30:33.341612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.841 Running I/O for 10 seconds... 
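The bdevperf run above receives its controller list through --json /dev/fd/63: gen_nvmf_target_json appends one heredoc fragment per subsystem carrying the bdev_nvme_attach_controller parameters, jq renders the joined fragments, and the resolved entries for Nvme1 through Nvme10 are what the final printf in the trace shows. A minimal sketch of that pattern, under the assumption that the fragments end up inside a standard bdev-subsystem config wrapper (the wrapper object itself is not visible in this excerpt, so its exact shape below is illustrative):

  # Build one bdev_nvme_attach_controller entry per subsystem (values as resolved in the trace).
  config=()
  for i in {1..10}; do
      config+=("$(cat <<EOF
  { "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode$i",
                "hostnqn": "nqn.2016-06.io.spdk:host$i",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" }
EOF
  )")
  done

  # Join the entries with commas, wrap them (wrapper shape assumed here), and hand the
  # result to bdevperf on an anonymous fd -- which is why the trace shows --json /dev/fd/63.
  joined=$(IFS=,; echo "${config[*]}")
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
      --json <(printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined")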
00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:36.841 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:37.102 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3167057 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3167057 ']' 00:30:37.362 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3167057 00:30:37.363 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:30:37.363 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.363 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167057 00:30:37.638 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:37.638 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:37.638 17:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167057' 00:30:37.638 killing process with pid 3167057 00:30:37.638 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3167057 00:30:37.638 17:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3167057 00:30:37.638 [2024-10-01 17:30:35.925501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.638 [2024-10-01 17:30:35.925643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 00:30:37.639 [2024-10-01 17:30:35.925648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set 
00:30:37.639 [2024-10-01 17:30:35.925652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9d90 is same with the state(6) to be set (this message repeats, differing only in timestamp, through 17:30:35.929502: first for tqpair=0xdc9d90, then for tqpair=0xc56470, tqpair=0xc56940 and tqpair=0xc56e30)
same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.929683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56e30 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931313] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.641 [2024-10-01 17:30:35.931352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 
00:30:37.642 [2024-10-01 17:30:35.931423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is 
same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b50 is same with the state(6) to be set 00:30:37.642 [2024-10-01 17:30:35.931870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.931908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.931933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.931942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.931952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.931959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.931969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.931976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.931986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.642 [2024-10-01 17:30:35.932177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.642 [2024-10-01 17:30:35.932212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.642 [2024-10-01 17:30:35.932220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-01 17:30:35.932308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 he state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t[2024-10-01 17:30:35.932365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:30:37.643 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t[2024-10-01 17:30:35.932378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:1he state(6) to be set 00:30:37.643 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t[2024-10-01 17:30:35.932398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:1he state(6) to be set 00:30:37.643 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:1[2024-10-01 17:30:35.932419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 he state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is 
same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-01 17:30:35.932465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 he state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.643 [2024-10-01 17:30:35.932487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.643 [2024-10-01 17:30:35.932500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.643 [2024-10-01 17:30:35.932502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t[2024-10-01 17:30:35.932522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:30:37.644 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the 
state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-01 17:30:35.932543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 he state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with t[2024-10-01 17:30:35.932572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:1he state(6) to be set 00:30:37.644 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 
00:30:37.644 [2024-10-01 17:30:35.932615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58040 is same with the state(6) to be set 00:30:37.644 [2024-10-01 17:30:35.932637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.932981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.932988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.933001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.933019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.933027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.644 [2024-10-01 17:30:35.933037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.644 [2024-10-01 17:30:35.933044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.933053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.933060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.933069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.933077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.933091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58510 is same with the state(6) to be set 00:30:37.645 [2024-10-01 17:30:35.933103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.645 [2024-10-01 17:30:35.933113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc58510 is same with the state(6) to be set 
00:30:37.645 [2024-10-01 17:30:35.933146] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1797960 was disconnected and freed. reset controller. 
00:30:37.645 [2024-10-01 17:30:35.935224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.645 [2024-10-01 17:30:35.935244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.645 [2024-10-01 17:30:35.935257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.645 [2024-10-01 17:30:35.935264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.645 [2024-10-01 17:30:35.935274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.645 [2024-10-01 17:30:35.935281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.645 [2024-10-01 17:30:35.935294] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.645 [2024-10-01 17:30:35.935558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.645 [2024-10-01 17:30:35.935568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.935987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.935999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.646 [2024-10-01 17:30:35.936158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.646 [2024-10-01 17:30:35.936220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.646 [2024-10-01 17:30:35.936228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 17:30:35.936246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 17:30:35.936262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 17:30:35.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 17:30:35.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 17:30:35.936313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.647 [2024-10-01 
17:30:35.936330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.936354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.647 [2024-10-01 17:30:35.936390] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1798e90 was disconnected and freed. reset controller. 00:30:37.647 1856.00 IOPS, 116.00 MiB/s [2024-10-01 17:30:35.943304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1809ad0 is same with the state(6) to be set 00:30:37.647 [2024-10-01 17:30:35.943409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1393e90 is same with the state(6) to be set 00:30:37.647 [2024-10-01 17:30:35.943498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395b80 is same with the state(6) to be set 00:30:37.647 [2024-10-01 17:30:35.943585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395720 is same with the state(6) to be set 00:30:37.647 [2024-10-01 17:30:35.943670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 
17:30:35.943687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c0e70 is same with the state(6) to be set 00:30:37.647 [2024-10-01 17:30:35.943755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.647 [2024-10-01 17:30:35.943795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.647 [2024-10-01 17:30:35.943803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.648 [2024-10-01 17:30:35.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.943820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b8570 is same with the state(6) to be set 00:30:37.648 [2024-10-01 17:30:35.943847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.648 [2024-10-01 17:30:35.943855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.943864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.648 [2024-10-01 17:30:35.943872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.943879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.943886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.943896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.943905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.943912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c12e0 is same with the state(6) to be set 
00:30:37.648 [2024-10-01 17:30:35.943933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.945196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58510 is same with the state(6) to be set 
00:30:37.648 [2024-10-01 17:30:35.953434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.953471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.953491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.953507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.953515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.953523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c0b10 is same with the state(6) to be set 
00:30:37.648 [2024-10-01 17:30:35.953579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01 17:30:35.953590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.648 [2024-10-01 17:30:35.953599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:37.648 [2024-10-01
17:30:35.953607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.648 [2024-10-01 17:30:35.953623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.648 [2024-10-01 17:30:35.953644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4610 is same with the state(6) to be set 00:30:37.648 [2024-10-01 17:30:35.953716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.648 [2024-10-01 17:30:35.953947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.648 [2024-10-01 17:30:35.953958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.953965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.953974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.953982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.953991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954049] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.649 [2024-10-01 17:30:35.954672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.649 [2024-10-01 17:30:35.954682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.650 [2024-10-01 17:30:35.954761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.954868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.954922] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1599d80 was disconnected and freed. reset controller. 
00:30:37.650 [2024-10-01 17:30:35.957741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:37.650 [2024-10-01 17:30:35.957777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:37.650 [2024-10-01 17:30:35.957791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c12e0 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.957802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0b10 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.957850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.650 [2024-10-01 17:30:35.957861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.650 [2024-10-01 17:30:35.957870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.650 [2024-10-01 17:30:35.957878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.650 [2024-10-01 17:30:35.957886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.650 [2024-10-01 17:30:35.957894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.650 [2024-10-01 17:30:35.957902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.650 [2024-10-01 17:30:35.957910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.650 [2024-10-01 17:30:35.957917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808580 is same with the state(6) to be set
00:30:37.650 [2024-10-01 17:30:35.957937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1809ad0 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.957950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1393e90 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.957969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395b80 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.957981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395720 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.958011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0e70 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.958030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b8570 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.958046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4610 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.959644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:37.650 [2024-10-01 17:30:35.960831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.650 [2024-10-01 17:30:35.960857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c0b10 with addr=10.0.0.2, port=4420
00:30:37.650 [2024-10-01 17:30:35.960866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c0b10 is same with the state(6) to be set
00:30:37.650 [2024-10-01 17:30:35.961342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.650 [2024-10-01 17:30:35.961384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c12e0 with addr=10.0.0.2, port=4420
00:30:37.650 [2024-10-01 17:30:35.961395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c12e0 is same with the state(6) to be set
00:30:37.650 [2024-10-01 17:30:35.961679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.650 [2024-10-01 17:30:35.961692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395b80 with addr=10.0.0.2, port=4420
00:30:37.650 [2024-10-01 17:30:35.961700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395b80 is same with the state(6) to be set
00:30:37.650 [2024-10-01 17:30:35.962164] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962215] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962294] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0b10 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.962325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c12e0 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.962335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395b80 (9): Bad file descriptor
00:30:37.650 [2024-10-01 17:30:35.962396] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962435] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962479] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962556] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:37.650 [2024-10-01 17:30:35.962577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:37.650 [2024-10-01 17:30:35.962585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:37.650 [2024-10-01 17:30:35.962594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:37.650 [2024-10-01 17:30:35.962610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:37.650 [2024-10-01 17:30:35.962617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:37.650 [2024-10-01 17:30:35.962624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:37.650 [2024-10-01 17:30:35.962636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:37.650 [2024-10-01 17:30:35.962643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:37.650 [2024-10-01 17:30:35.962650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:37.650 [2024-10-01 17:30:35.962713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.650 [2024-10-01 17:30:35.962723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.650 [2024-10-01 17:30:35.962730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.650 [2024-10-01 17:30:35.967770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808580 (9): Bad file descriptor 00:30:37.650 [2024-10-01 17:30:35.967931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.967945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.967962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.967970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.967980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.967987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.968002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.968020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.968028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.968037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.650 [2024-10-01 17:30:35.968044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.650 [2024-10-01 17:30:35.968054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.651 [2024-10-01 17:30:35.968701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.651 [2024-10-01 17:30:35.968709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.652 [2024-10-01 17:30:35.968797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 
17:30:35.968976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.968986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.968997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.969016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.969072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.969089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.969098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159b020 is same with the state(6) to be set 00:30:37.652 [2024-10-01 17:30:35.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.652 [2024-10-01 17:30:35.970689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.652 [2024-10-01 17:30:35.970699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.970982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.970990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.653 [2024-10-01 17:30:35.971420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.653 [2024-10-01 17:30:35.971427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.971603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.971611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177d0d0 is same with the state(6) to be set 00:30:37.654 [2024-10-01 17:30:35.972949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.972967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.972980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.972989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.654 [2024-10-01 17:30:35.973421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.654 [2024-10-01 17:30:35.973429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
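The repeated "ABORTED - SQ DELETION (00/08)" completions above decode, per the NVMe specification, to Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion): every READ still in flight on submission queue 1 is failed back while the TCP qpair is torn down. A minimal sketch of how a completion callback could recognize that status with the public SPDK API follows; it is only illustrative, assumes the SPDK development headers, and the names read_complete_cb / aborted_by_sq_deletion are placeholders, not taken from the autotest code.

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Illustrative completion callback: flags an I/O that was aborted because
     * its submission queue was deleted (the 00/08 status printed above). */
    static void
    read_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            bool *aborted_by_sq_deletion = cb_arg;

            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Matches the *NOTICE* completions in the log: the READ did
                     * not complete successfully and can be resubmitted, e.g. on
                     * a different qpair, since reads are idempotent. */
                    *aborted_by_sq_deletion = true;
            }
    }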
00:30:37.655 [2024-10-01 17:30:35.973845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.973986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.973997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 
17:30:35.974026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.974102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.655 [2024-10-01 17:30:35.974110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a550 is same with the state(6) to be set 00:30:37.655 [2024-10-01 17:30:35.975430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.655 [2024-10-01 17:30:35.975446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.975983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.975990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.976006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.976014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.976025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.976033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.976042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.976050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.656 [2024-10-01 17:30:35.976059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.656 [2024-10-01 17:30:35.976066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.976574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.976583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a410 is same with the state(6) to be set 00:30:37.657 [2024-10-01 17:30:35.977914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.977928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.977940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.977948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.977958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.657 [2024-10-01 17:30:35.977966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.657 [2024-10-01 17:30:35.977975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.977983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.977996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.658 [2024-10-01 17:30:35.978534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.658 [2024-10-01 17:30:35.978542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
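Each burst of aborts above ends with nvme_tcp_qpair_set_recv_state reporting that the receive state of a tqpair (0x177d0d0, 0x178a550, 0x179a410, ...) is already the state being set, which in this run accompanies the TCP qpairs being disconnected while READs are outstanding. When a qpair fails at the transport layer, spdk_nvme_qpair_process_completions() returns a negative errno (typically -ENXIO), and an application that wants to keep issuing I/O has to free the failed qpair and allocate a new one before resubmitting the aborted reads. The sketch below is a hedged illustration of that recovery step, not the test's code; the function name, the NULL/default qpair options, and the absence of retry/backoff logic are all assumptions made for brevity.

    #include <stdint.h>
    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Illustrative recovery path: poll the qpair, and if the transport has
     * failed (as in the disconnects logged above), replace the qpair so the
     * aborted commands can be reissued. */
    static struct spdk_nvme_qpair *
    poll_or_recreate_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            /* 0 means "no limit" on the number of completions processed. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                    /* Transport-level failure: outstanding commands were already
                     * completed with ABORTED - SQ DELETION, so drop this qpair and
                     * allocate a fresh one with default options. */
                    spdk_nvme_ctrlr_free_io_qpair(qpair);
                    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
            }
            return qpair;
    }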
00:30:37.659 [2024-10-01 17:30:35.978645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 
17:30:35.978821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.978981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.978992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.979004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.979013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.979021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.979031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.979038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.979049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.979056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.979064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179b990 is same with the state(6) to be set 00:30:37.659 [2024-10-01 17:30:35.980414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.659 [2024-10-01 17:30:35.980607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.659 [2024-10-01 17:30:35.980617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.980980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.980990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.660 [2024-10-01 17:30:35.981330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.660 [2024-10-01 17:30:35.981337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.981571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.981580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1611440 is same with the state(6) to be set 00:30:37.661 [2024-10-01 17:30:35.983168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983302] 
bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.661 [2024-10-01 17:30:35.983318] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.661 [2024-10-01 17:30:35.983400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:37.661 [2024-10-01 17:30:35.983811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.661 [2024-10-01 17:30:35.983827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395720 with addr=10.0.0.2, port=4420 00:30:37.661 [2024-10-01 17:30:35.983836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395720 is same with the state(6) to be set 00:30:37.661 [2024-10-01 17:30:35.984227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.661 [2024-10-01 17:30:35.984268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1393e90 with addr=10.0.0.2, port=4420 00:30:37.661 [2024-10-01 17:30:35.984280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1393e90 is same with the state(6) to be set 00:30:37.661 [2024-10-01 17:30:35.984611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.661 [2024-10-01 17:30:35.984624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c0e70 with addr=10.0.0.2, port=4420 00:30:37.661 [2024-10-01 17:30:35.984631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c0e70 is same with the state(6) to be set 00:30:37.661 [2024-10-01 17:30:35.984933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.661 [2024-10-01 17:30:35.984944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a4610 with addr=10.0.0.2, port=4420 00:30:37.661 [2024-10-01 17:30:35.984952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a4610 is same with the state(6) to be set 00:30:37.661 [2024-10-01 17:30:35.986276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.661 [2024-10-01 17:30:35.986516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.661 [2024-10-01 17:30:35.986525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:37.662 [2024-10-01 17:30:35.986533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 
[2024-10-01 17:30:35.986713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 
17:30:35.986890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.986985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.662 [2024-10-01 17:30:35.987227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-10-01 17:30:35.987237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-10-01 17:30:35.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.663 [2024-10-01 17:30:35.987425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.663 [2024-10-01 17:30:35.987435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdd0 is same with the state(6) to be set
00:30:37.663 [2024-10-01 17:30:35.989838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:37.663 [2024-10-01 17:30:35.989875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:37.663 [2024-10-01 17:30:35.989886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:37.663 task offset: 24576 on job bdev=Nvme5n1 fails
00:30:37.663
00:30:37.663 Latency(us)
00:30:37.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.663 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme1n1 ended in about 1.04 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme1n1 : 1.04 185.20 11.57 61.73 0.00 256522.88 22063.79 242920.11
00:30:37.663 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme2n1 ended in about 1.05 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme2n1 : 1.05 183.25 11.45 61.08 0.00 254542.72 17257.81 295348.91
00:30:37.663 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme3n1 ended in about 1.05 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme3n1 : 1.05 182.81 11.43 60.94 0.00 250376.32 20971.52 242920.11
00:30:37.663 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme4n1 ended in about 1.05 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme4n1 : 1.05 182.38 11.40 60.79 0.00 246299.95 19005.44 244667.73
00:30:37.663 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme5n1 ended in about 1.03 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme5n1 : 1.03 185.73 11.61 61.91 0.00 236848.64 20753.07 256901.12
00:30:37.663 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme6n1 ended in about 1.03 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme6n1 : 1.03 185.51 11.59 61.84 0.00 232410.45 15947.09 248162.99
00:30:37.663 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme7n1 ended in about 1.06 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme7n1 : 1.06 181.95 11.37 60.65 0.00 232690.13 15837.87 288358.40
00:30:37.663 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme8n1 ended in about 1.06 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme8n1 : 1.06 181.53 11.35 60.51 0.00 228573.44 18350.08 244667.73
00:30:37.663 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme9n1 ended in about 1.07 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme9n1 : 1.07 180.10 11.26 60.03 0.00 225946.24 12288.00 255153.49
00:30:37.663 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:37.663 Job: Nvme10n1 ended in about 1.06 seconds with error
00:30:37.663 Verification LBA range: start 0x0 length 0x400
00:30:37.663 Nvme10n1 : 1.06 120.73 7.55 60.37 0.00 293156.98 19879.25 293601.28
00:30:37.663 ===================================================================================================================
00:30:37.663 Total : 1769.19 110.57 609.85 0.00 244520.87 12288.00 295348.91
00:30:37.663 [2024-10-01 17:30:36.016481] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:37.663 [2024-10-01 17:30:36.016512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:37.663 [2024-10-01 17:30:36.016815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.663 [2024-10-01 17:30:36.016835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b8570 with addr=10.0.0.2, port=4420
00:30:37.663 [2024-10-01 17:30:36.016846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b8570 is same with the state(6) to be set
00:30:37.663 [2024-10-01 17:30:36.017154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.663 [2024-10-01 17:30:36.017167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1809ad0 with addr=10.0.0.2, port=4420
00:30:37.663 [2024-10-01 17:30:36.017175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1809ad0 is same with the state(6) to be set
00:30:37.663 [2024-10-01 17:30:36.017187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395720 (9): Bad file descriptor
00:30:37.663 [2024-10-01 17:30:36.017200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1393e90 (9): Bad file descriptor
00:30:37.663 [2024-10-01 17:30:36.017209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0e70 (9): Bad file descriptor
00:30:37.663 [2024-10-01 17:30:36.017219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a4610 (9): Bad file descriptor
00:30:37.663 [2024-10-01 17:30:36.017663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.663 [2024-10-01 17:30:36.017679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395b80 with addr=10.0.0.2, port=4420
00:30:37.663 [2024-10-01 17:30:36.017687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395b80 is same with the state(6) to be set
00:30:37.663 [2024-10-01 17:30:36.017965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.663 [2024-10-01 17:30:36.017977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c12e0 with addr=10.0.0.2, port=4420
00:30:37.663 [2024-10-01 17:30:36.017984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c12e0 is same with the state(6) to be set
00:30:37.663 [2024-10-01 17:30:36.018081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.663 [2024-10-01 17:30:36.018091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c0b10 with addr=10.0.0.2, port=4420
00:30:37.663 [2024-10-01 17:30:36.018099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x17c0b10 is same with the state(6) to be set 00:30:37.663 [2024-10-01 17:30:36.018263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.663 [2024-10-01 17:30:36.018274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1808580 with addr=10.0.0.2, port=4420 00:30:37.663 [2024-10-01 17:30:36.018281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808580 is same with the state(6) to be set 00:30:37.663 [2024-10-01 17:30:36.018290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b8570 (9): Bad file descriptor 00:30:37.663 [2024-10-01 17:30:36.018299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1809ad0 (9): Bad file descriptor 00:30:37.663 [2024-10-01 17:30:36.018310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:37.663 [2024-10-01 17:30:36.018317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:37.663 [2024-10-01 17:30:36.018326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:37.663 [2024-10-01 17:30:36.018340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:37.663 [2024-10-01 17:30:36.018347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:37.663 [2024-10-01 17:30:36.018355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:37.663 [2024-10-01 17:30:36.018366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:37.663 [2024-10-01 17:30:36.018373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:37.663 [2024-10-01 17:30:36.018380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:37.664 [2024-10-01 17:30:36.018391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.018398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.018405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:37.664 [2024-10-01 17:30:36.018429] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.664 [2024-10-01 17:30:36.018442] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.664 [2024-10-01 17:30:36.018453] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.664 [2024-10-01 17:30:36.018463] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.664 [2024-10-01 17:30:36.018480] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:37.664 [2024-10-01 17:30:36.018490] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:37.664 [2024-10-01 17:30:36.018830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.018842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.018849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.018856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.018864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395b80 (9): Bad file descriptor 00:30:37.664 [2024-10-01 17:30:36.018874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c12e0 (9): Bad file descriptor 00:30:37.664 [2024-10-01 17:30:36.018884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0b10 (9): Bad file descriptor 00:30:37.664 [2024-10-01 17:30:36.018894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808580 (9): Bad file descriptor 00:30:37.664 [2024-10-01 17:30:36.018902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.018909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.018917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:37.664 [2024-10-01 17:30:36.018927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.018934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.018941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:37.664 [2024-10-01 17:30:36.019218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.019230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.019237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.019245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.019252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:37.664 [2024-10-01 17:30:36.019262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.019269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.019277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:30:37.664 [2024-10-01 17:30:36.019286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.019293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.019300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:37.664 [2024-10-01 17:30:36.019311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:37.664 [2024-10-01 17:30:36.019318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:37.664 [2024-10-01 17:30:36.019326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:37.664 [2024-10-01 17:30:36.019364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.019373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.019380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.664 [2024-10-01 17:30:36.019386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:37.924 17:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3167211 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3167211 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3167211 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:38.864 
17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.864 rmmod nvme_tcp 00:30:38.864 rmmod nvme_fabrics 00:30:38.864 rmmod nvme_keyring 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3167057 ']' 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3167057 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3167057 ']' 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3167057 00:30:38.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3167057) - No such process 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3167057 is not found' 00:30:38.864 Process with pid 3167057 is not found 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:38.864 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@789 -- # iptables-restore 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.865 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.409 00:30:41.409 real 0m7.335s 00:30:41.409 user 0m17.391s 00:30:41.409 sys 0m1.246s 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.409 ************************************ 00:30:41.409 END TEST nvmf_shutdown_tc3 00:30:41.409 ************************************ 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.409 ************************************ 00:30:41.409 START TEST nvmf_shutdown_tc4 00:30:41.409 ************************************ 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:41.409 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:41.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:41.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:41.410 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:41.410 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:41.410 
17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.410 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.411 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:41.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:30:41.411 00:30:41.411 --- 10.0.0.2 ping statistics --- 00:30:41.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.411 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:30:41.411 00:30:41.411 --- 10.0.0.1 ping statistics --- 00:30:41.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.411 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3168648 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3168648 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3168648 ']' 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:41.411 17:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:41.411 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:41.411 [2024-10-01 17:30:39.921599] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:30:41.411 [2024-10-01 17:30:39.921669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.672 [2024-10-01 17:30:40.011383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.672 [2024-10-01 17:30:40.051033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.672 [2024-10-01 17:30:40.051075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.672 [2024-10-01 17:30:40.051082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.672 [2024-10-01 17:30:40.051087] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.672 [2024-10-01 17:30:40.051091] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.672 [2024-10-01 17:30:40.051220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.672 [2024-10-01 17:30:40.051379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.672 [2024-10-01 17:30:40.051536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.672 [2024-10-01 17:30:40.051538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.243 [2024-10-01 17:30:40.754585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.243 17:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.243 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:42.503 
17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.503 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.503 Malloc1 00:30:42.503 [2024-10-01 17:30:40.853035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.503 Malloc2 00:30:42.503 Malloc3 00:30:42.503 Malloc4 00:30:42.503 Malloc5 00:30:42.503 Malloc6 00:30:42.764 Malloc7 00:30:42.764 Malloc8 00:30:42.764 Malloc9 00:30:42.764 Malloc10 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3168865 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:42.764 17:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:42.764 [2024-10-01 17:30:41.302524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3168648 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3168648 ']' 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3168648 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3168648 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3168648' 00:30:48.061 killing process with pid 3168648 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3168648 00:30:48.061 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3168648 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with 
error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 [2024-10-01 17:30:46.328893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 Write completed with error (sct=0, sc=8) 00:30:48.061 starting I/O failed: -6 00:30:48.061 
00:30:48.061 Write completed with error (sct=0, sc=8)
00:30:48.061 starting I/O failed: -6
[the two lines above repeat for each outstanding write on the failing qpairs]
00:30:48.061 [2024-10-01 17:30:46.329755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:48.061 [2024-10-01 17:30:46.329865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384890 is same with the state(6) to be set
[the tqpair=0x1384890 message repeats 7 times, 17:30:46.329865 through 17:30:46.329930, interleaved with further write failures]
00:30:48.062 [2024-10-01 17:30:46.330375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1385230 is same with the state(6) to be set
[the tqpair=0x1385230 message repeats 7 times, 17:30:46.330375 through 17:30:46.330429, interleaved with further write failures]
00:30:48.062 [2024-10-01 17:30:46.330621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13843c0 is same with the state(6) to be set
[the tqpair=0x13843c0 message repeats 3 times, 17:30:46.330621 through 17:30:46.330652]
00:30:48.062 [2024-10-01 17:30:46.330672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[write failures continue: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:30:48.062 [2024-10-01 17:30:46.332084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.062 NVMe io qpair process completion error
[a further burst of write failures follows, in groups of four completions each ending with starting I/O failed: -6]
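[editor's note: the completion status reported above comes from the initiator-side NVMe driver. sct=0 is the generic status code type and, within that set, sc=0x08 is "command aborted due to SQ deletion", which is what outstanding writes are completed with once their queue pair is torn down; the -6 is -ENXIO ("No such device or address") for the failed TCP connection. A minimal sketch of decoding such a completion with the public SPDK API follows; the callback name, the (void)ctx handling, and the surrounding application logic are illustrative assumptions, not this test's actual code.]

/* Minimal sketch (assumed example code, not part of this autotest): decode the
 * completion status behind the "Write completed with error (sct=0, sc=8)"
 * lines above, using the public SPDK NVMe driver API. */
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback of the kind passed to spdk_nvme_ns_cmd_write(). */
static void
write_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0 is SPDK_NVME_SCT_GENERIC; within the generic set,
		 * sc=0x08 is SPDK_NVME_SC_ABORTED_SQ_DELETION: the write was
		 * aborted because its submission queue was deleted when the
		 * qpair went down. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}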
[write failures continue]
00:30:48.063 [2024-10-01 17:30:46.333305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[write failures continue]
00:30:48.063 [2024-10-01 17:30:46.334304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[write failures continue]
00:30:48.063 [2024-10-01 17:30:46.335217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[write failures continue]
00:30:48.064 [2024-10-01 17:30:46.336564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.064 NVMe io qpair process completion error
[write failures continue]
00:30:48.064 [2024-10-01 17:30:46.337770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[write failures continue]
00:30:48.064 [2024-10-01 17:30:46.338586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[write failures continue]
00:30:48.065 [2024-10-01 17:30:46.339538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[write failures continue: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
[write failures continue]
00:30:48.065 [2024-10-01 17:30:46.341146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.065 NVMe io qpair process completion error
[write failures continue]
00:30:48.066 [2024-10-01 17:30:46.342309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[write failures continue]
00:30:48.066 [2024-10-01 17:30:46.343307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[write failures continue]
00:30:48.066 [2024-10-01 17:30:46.344213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[write failures continue]
00:30:48.067 [2024-10-01 17:30:46.347433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.067 NVMe io qpair process completion error
[write failures continue]
00:30:48.067 [2024-10-01 17:30:46.348503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[write failures continue]
00:30:48.067 [2024-10-01 17:30:46.349437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[write failures continue]
00:30:48.068 [2024-10-01 17:30:46.350369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[write failures continue: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 
00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 [2024-10-01 17:30:46.351965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.068 NVMe io qpair process completion error 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 starting I/O failed: -6 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.068 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 [2024-10-01 17:30:46.353416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, 
sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 [2024-10-01 17:30:46.354251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 
00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 [2024-10-01 17:30:46.355199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error 
(sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.069 Write completed with error (sct=0, sc=8) 00:30:48.069 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error 
(sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 [2024-10-01 17:30:46.359729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:48.070 NVMe io qpair process completion error 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 
00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 [2024-10-01 17:30:46.361003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 
Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 Write completed with error (sct=0, sc=8) 00:30:48.070 starting I/O failed: -6 00:30:48.070 [2024-10-01 17:30:46.361839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with 
error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 [2024-10-01 17:30:46.362783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 
starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 [2024-10-01 17:30:46.364228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:48.071 NVMe io qpair process completion error 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, 
sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.071 starting I/O failed: -6 00:30:48.071 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 [2024-10-01 17:30:46.365098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 
Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 [2024-10-01 17:30:46.365910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting 
I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 [2024-10-01 17:30:46.366861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O failed: -6 00:30:48.072 Write completed with error (sct=0, sc=8) 00:30:48.072 starting I/O 
failed: -6
00:30:48.072 - 00:30:48.076 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for every outstanding write on qpair ids 1-4 of the affected controllers)
00:30:48.073 [2024-10-01 17:30:46.369754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.073 NVMe io qpair process completion error
00:30:48.073 [2024-10-01 17:30:46.370927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:48.073 [2024-10-01 17:30:46.371780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:48.074 [2024-10-01 17:30:46.372721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:48.074 [2024-10-01 17:30:46.374354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.074 NVMe io qpair process completion error
00:30:48.075 [2024-10-01 17:30:46.375897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:48.075 [2024-10-01 17:30:46.376738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:48.075 [2024-10-01 17:30:46.377666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:48.076 [2024-10-01 17:30:46.381903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:48.076 NVMe io qpair process completion error
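The CQ transport errors above show the TCP qpairs going away while spdk_nvme_perf still has writes queued, so every outstanding command completes with an error and new submissions are rejected (-6). As a rough illustration only, and not a transcript of target/shutdown.sh, a shutdown-under-load run of this shape can be driven by hand as sketched below; the perf binary path and the 10.0.0.2:4420 listener come from this log, while the option values, the cnode1 subsystem choice, and the target_pid variable are assumptions:

# Illustrative sketch, not part of the recorded run: start a write workload,
# then take the target away while the queues are still full.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 4096 -w write -t 30 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perf_pid=$!
sleep 2
kill -9 "$target_pid"     # target_pid: assumed variable holding the NVMe-oF target PID
wait "$perf_pid" || true  # perf then reports 'errors occurred', as seen further down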
00:30:48.076 Initializing NVMe Controllers
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:48.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:48.076 Controller IO queue size 128, less than required (reported for each of the ten controllers above).
00:30:48.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
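The queue-size advisory above means each perf worker asked for more outstanding I/O than the 128-entry I/O queues the target reports, so the surplus requests wait inside the NVMe driver rather than on the wire. If that queueing is undesirable, the usual knobs are the workload's queue depth and I/O size; a hedged example follows, where the -q/-o values are illustrative and not taken from this run:

# Request a queue depth (-q) that fits the reported 128-entry queues, and/or a
# smaller I/O size (-o), as the advisory suggests.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 64 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'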
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:48.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:48.076 Initialization complete. Launching workers.
00:30:48.076 ========================================================
00:30:48.076 Latency(us)
00:30:48.076 Device Information : IOPS MiB/s Average min max
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1891.77 81.29 67680.89 893.73 123323.72
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1878.99 80.74 68162.65 720.55 151244.05
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1916.91 82.37 66832.49 691.62 151395.41
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1875.84 80.60 68321.71 662.25 120830.69
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1826.18 78.47 70221.68 945.87 118443.76
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1898.47 81.58 67575.58 689.03 121875.35
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1906.44 81.92 67355.57 841.03 117870.28
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1902.46 81.75 67512.35 813.18 118331.93
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1902.88 81.76 67535.69 621.46 121498.74
00:30:48.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1906.02 81.90 67451.60 732.90 136873.77
00:30:48.076 ========================================================
00:30:48.076 Total : 18905.96 812.37 67853.68 621.46 151395.41
00:30:48.076
00:30:48.076 [2024-10-01 17:30:46.386572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1709b20 is same with the state(6) to be set
00:30:48.076 [2024-10-01 17:30:46.386617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a1c0 is same with the state(6) to be set
00:30:48.076 [2024-10-01 17:30:46.386650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170bc40 is same with the state(6) to be set
00:30:48.076 [2024-10-01 17:30:46.386679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170c350 is same with the state(6) to be set
00:30:48.076 [2024-10-01 17:30:46.386708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1709d00 is same with the state(6) to be set
00:30:48.076 [2024-10-01 17:30:46.386737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x170a820 is same with the state(6) to be set 00:30:48.076 [2024-10-01 17:30:46.386766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170c020 is same with the state(6) to be set 00:30:48.076 [2024-10-01 17:30:46.386793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ab50 is same with the state(6) to be set 00:30:48.076 [2024-10-01 17:30:46.386821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a4f0 is same with the state(6) to be set 00:30:48.076 [2024-10-01 17:30:46.386849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170c680 is same with the state(6) to be set 00:30:48.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:48.076 17:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3168865 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3168865 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3168865 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:49.462 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:49.463 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.463 rmmod nvme_tcp 00:30:49.463 rmmod nvme_fabrics 00:30:49.463 rmmod nvme_keyring 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3168648 ']' 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3168648 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3168648 ']' 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3168648 00:30:49.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3168648) - No such process 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3168648 is not found' 00:30:49.463 Process with pid 3168648 is not found 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.463 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.377 17:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.377 00:30:51.377 real 0m10.272s 00:30:51.377 user 0m27.930s 00:30:51.377 sys 0m3.982s 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:51.377 ************************************ 00:30:51.377 END TEST nvmf_shutdown_tc4 00:30:51.377 ************************************ 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:51.377 00:30:51.377 real 0m43.111s 00:30:51.377 user 1m45.472s 00:30:51.377 sys 0m13.474s 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.377 ************************************ 00:30:51.377 END TEST nvmf_shutdown 00:30:51.377 ************************************ 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:51.377 00:30:51.377 real 19m30.197s 00:30:51.377 user 51m36.304s 00:30:51.377 sys 4m42.450s 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.377 17:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:51.377 ************************************ 00:30:51.377 END TEST nvmf_target_extra 00:30:51.377 ************************************ 00:30:51.377 17:30:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:51.377 17:30:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:51.377 17:30:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:51.377 17:30:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.377 ************************************ 00:30:51.377 START TEST nvmf_host 00:30:51.377 ************************************ 00:30:51.377 17:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:51.639 * Looking for test storage... 
00:30:51.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:51.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.639 --rc genhtml_branch_coverage=1 00:30:51.639 --rc genhtml_function_coverage=1 00:30:51.639 --rc genhtml_legend=1 00:30:51.639 --rc geninfo_all_blocks=1 00:30:51.639 --rc geninfo_unexecuted_blocks=1 00:30:51.639 00:30:51.639 ' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:51.639 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.639 --rc genhtml_branch_coverage=1 00:30:51.639 --rc genhtml_function_coverage=1 00:30:51.639 --rc genhtml_legend=1 00:30:51.639 --rc geninfo_all_blocks=1 00:30:51.639 --rc geninfo_unexecuted_blocks=1 00:30:51.639 00:30:51.639 ' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:51.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.639 --rc genhtml_branch_coverage=1 00:30:51.639 --rc genhtml_function_coverage=1 00:30:51.639 --rc genhtml_legend=1 00:30:51.639 --rc geninfo_all_blocks=1 00:30:51.639 --rc geninfo_unexecuted_blocks=1 00:30:51.639 00:30:51.639 ' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:51.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.639 --rc genhtml_branch_coverage=1 00:30:51.639 --rc genhtml_function_coverage=1 00:30:51.639 --rc genhtml_legend=1 00:30:51.639 --rc geninfo_all_blocks=1 00:30:51.639 --rc geninfo_unexecuted_blocks=1 00:30:51.639 00:30:51.639 ' 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.639 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:51.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.640 ************************************ 00:30:51.640 START TEST nvmf_multicontroller 00:30:51.640 ************************************ 00:30:51.640 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:51.903 * Looking for test storage... 00:30:51.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.903 --rc genhtml_branch_coverage=1 00:30:51.903 --rc genhtml_function_coverage=1 00:30:51.903 --rc genhtml_legend=1 00:30:51.903 --rc geninfo_all_blocks=1 00:30:51.903 --rc geninfo_unexecuted_blocks=1 00:30:51.903 00:30:51.903 ' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.903 --rc genhtml_branch_coverage=1 00:30:51.903 --rc genhtml_function_coverage=1 00:30:51.903 --rc genhtml_legend=1 00:30:51.903 --rc geninfo_all_blocks=1 00:30:51.903 --rc geninfo_unexecuted_blocks=1 00:30:51.903 00:30:51.903 ' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.903 --rc genhtml_branch_coverage=1 00:30:51.903 --rc genhtml_function_coverage=1 00:30:51.903 --rc genhtml_legend=1 00:30:51.903 --rc geninfo_all_blocks=1 00:30:51.903 --rc geninfo_unexecuted_blocks=1 00:30:51.903 00:30:51.903 ' 00:30:51.903 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.903 --rc genhtml_branch_coverage=1 00:30:51.903 --rc genhtml_function_coverage=1 00:30:51.903 --rc genhtml_legend=1 00:30:51.903 --rc geninfo_all_blocks=1 00:30:51.903 --rc geninfo_unexecuted_blocks=1 00:30:51.903 00:30:51.903 ' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:51.904 17:30:50 
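For context on the lcov gate traced above: the harness splits both version strings on '.' and '-' and compares them field by field (ver1_l=2, ver2_l=1, then an element-wise loop), and because 1.15 sorts below 2 it keeps the legacy --rc lcov_* options. A minimal, self-contained sketch of that comparison follows; it is not the original scripts/common.sh helper verbatim (the real cmp_versions also normalizes non-numeric fields), just the same idea.

  # Sketch of the dotted-version comparison walked through in the xtrace above.
  cmp_versions_sketch() {
      local IFS=.-                      # split on dots and dashes, as the xtrace shows
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op="$2"
      read -ra ver2 <<< "$3"
      local v a b
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }
  # Mirrors the call in the log: 1.15 is older than 2, so the '<' branch holds.
  cmp_versions_sketch 1.15 '<' 2 && echo "lcov older than 2.x: use legacy --rc lcov_* options"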
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:51.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.904 17:30:50 
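The common.sh defaults dumped above derive the initiator identity from nvme-cli: NVME_HOSTNQN comes from `nvme gen-hostnqn` and NVME_HOSTID carries the UUID portion of that NQN, and both feed the NVME_HOST argument array used for later `nvme connect` calls. A small sketch of that derivation, assuming only nvme-cli is installed; the prefix-stripping shown here is one plausible way to recover the UUID and may differ from the helper's exact implementation.

  # Derive the host NQN and host ID the way the defaults above suggest.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: keep just the UUID after the "uuid:" prefix
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "nvme connect would be invoked with: ${NVME_HOST[*]}"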
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.904 17:30:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.489 
17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.489 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:58.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:58.490 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.490 17:30:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:58.490 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:58.490 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
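nvmftestinit then enumerates the usable NICs: the harness keeps per-family PCI-ID lists (E810 0x1592/0x159b, X722 0x37d2, several Mellanox IDs), matches the two ice-driven 0x8086:0x159b functions on this machine, and looks up their kernel net devices under sysfs, which is how it lands on cvl_0_0 and cvl_0_1. A standalone sketch of that sysfs walk, assuming the two PCI addresses printed above:

  # For each supported NIC function found above, list the net devices the
  # kernel exposes under its sysfs node (same pattern as nvmf/common.sh@409).
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      vendor=$(cat /sys/bus/pci/devices/$pci/vendor 2>/dev/null)   # 0x8086 for these ports
      device=$(cat /sys/bus/pci/devices/$pci/device 2>/dev/null)   # 0x159b (E810 family)
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue
          echo "Found net device under $pci ($vendor - $device): ${netdir##*/}"
      done
  done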
00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.490 17:30:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:30:58.751 00:30:58.751 --- 10.0.0.2 ping statistics --- 00:30:58.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.751 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:58.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:30:58.751 00:30:58.751 --- 10.0.0.1 ping statistics --- 00:30:58.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.751 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3174137 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3174137 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3174137 ']' 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:58.751 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.751 [2024-10-01 17:30:57.204909] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
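nvmf_tcp_init, whose commands are traced above, builds the two-interface TCP topology the rest of the test relies on: cvl_0_0 moves into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch (interface names and addresses are the ones from this run; run as root):

  # Target/initiator split used by the test, reduced to its essential commands.
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator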
00:30:58.751 [2024-10-01 17:30:57.204977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.751 [2024-10-01 17:30:57.295443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:59.013 [2024-10-01 17:30:57.342825] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.013 [2024-10-01 17:30:57.342884] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.013 [2024-10-01 17:30:57.342892] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.013 [2024-10-01 17:30:57.342899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.013 [2024-10-01 17:30:57.342905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.013 [2024-10-01 17:30:57.343006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.013 [2024-10-01 17:30:57.343187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.013 [2024-10-01 17:30:57.343283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.582 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.582 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:59.582 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:59.582 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.582 17:30:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.582 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.582 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.583 [2024-10-01 17:30:58.046256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.583 Malloc0 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.583 [2024-10-01 17:30:58.117266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.583 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.583 [2024-10-01 17:30:58.129208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.842 Malloc1 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3174486 00:30:59.842 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3174486 /var/tmp/bdevperf.sock 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3174486 ']' 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
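The block of rpc_cmd calls traced above provisions two single-namespace subsystems on the target and then launches bdevperf in wait-for-RPC mode, which is what the "Waiting for process to start up..." message just printed refers to. A condensed sketch of that sequence, assuming rpc_cmd forwards its arguments verbatim to SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock; the workspace path is the one from this run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"                 # assumption: rpc_cmd wraps this script
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Subsystem 1: 64 MB / 512 B Malloc bdev, one namespace, listeners on both ports.
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Subsystem 2: same layout on its own Malloc bdev.
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf starts idle (-z) on its own RPC socket so controllers can be attached by hand.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &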
00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:59.843 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 NVMe0n1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.103 1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 request: 00:31:00.103 { 00:31:00.103 "name": "NVMe0", 00:31:00.103 "trtype": "tcp", 00:31:00.103 "traddr": "10.0.0.2", 00:31:00.103 "adrfam": "ipv4", 00:31:00.103 "trsvcid": "4420", 00:31:00.103 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:00.103 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:00.103 "hostaddr": "10.0.0.1", 00:31:00.103 "prchk_reftag": false, 00:31:00.103 "prchk_guard": false, 00:31:00.103 "hdgst": false, 00:31:00.103 "ddgst": false, 00:31:00.103 "allow_unrecognized_csi": false, 00:31:00.103 "method": "bdev_nvme_attach_controller", 00:31:00.103 "req_id": 1 00:31:00.103 } 00:31:00.103 Got JSON-RPC error response 00:31:00.103 response: 00:31:00.103 { 00:31:00.103 "code": -114, 00:31:00.103 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:00.103 } 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 request: 00:31:00.103 { 00:31:00.103 "name": "NVMe0", 00:31:00.103 "trtype": "tcp", 00:31:00.103 "traddr": "10.0.0.2", 00:31:00.103 "adrfam": "ipv4", 00:31:00.103 "trsvcid": "4420", 00:31:00.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:00.103 "hostaddr": "10.0.0.1", 00:31:00.103 "prchk_reftag": false, 00:31:00.103 "prchk_guard": false, 00:31:00.103 "hdgst": false, 00:31:00.103 "ddgst": false, 00:31:00.103 "allow_unrecognized_csi": false, 00:31:00.103 "method": "bdev_nvme_attach_controller", 00:31:00.103 "req_id": 1 00:31:00.103 } 00:31:00.103 Got JSON-RPC error response 00:31:00.103 response: 00:31:00.103 { 00:31:00.103 "code": -114, 00:31:00.103 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:00.103 } 00:31:00.103 17:30:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.103 request: 00:31:00.103 { 00:31:00.103 "name": "NVMe0", 00:31:00.103 "trtype": "tcp", 00:31:00.103 "traddr": "10.0.0.2", 00:31:00.103 "adrfam": "ipv4", 00:31:00.103 "trsvcid": "4420", 00:31:00.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.103 "hostaddr": "10.0.0.1", 00:31:00.103 "prchk_reftag": false, 00:31:00.103 "prchk_guard": false, 00:31:00.103 "hdgst": false, 00:31:00.103 "ddgst": false, 00:31:00.103 "multipath": "disable", 00:31:00.103 "allow_unrecognized_csi": false, 00:31:00.103 "method": "bdev_nvme_attach_controller", 00:31:00.103 "req_id": 1 00:31:00.103 } 00:31:00.103 Got JSON-RPC error response 00:31:00.103 response: 00:31:00.103 { 00:31:00.103 "code": -114, 00:31:00.103 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:00.103 } 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:00.103 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:00.104 17:30:58 
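The third negative case above adds `-x disable` to the same request: with multipath explicitly disabled for the existing NVMe0 controller, a second path is refused with its own message ("already exists and multipath is disabled"). Sketch of just that call, same assumptions as the previous block:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Expected to fail with -114: NVMe0 exists and this request forbids multipath.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || true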
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.104 request: 00:31:00.104 { 00:31:00.104 "name": "NVMe0", 00:31:00.104 "trtype": "tcp", 00:31:00.104 "traddr": "10.0.0.2", 00:31:00.104 "adrfam": "ipv4", 00:31:00.104 "trsvcid": "4420", 00:31:00.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.104 "hostaddr": "10.0.0.1", 00:31:00.104 "prchk_reftag": false, 00:31:00.104 "prchk_guard": false, 00:31:00.104 "hdgst": false, 00:31:00.104 "ddgst": false, 00:31:00.104 "multipath": "failover", 00:31:00.104 "allow_unrecognized_csi": false, 00:31:00.104 "method": "bdev_nvme_attach_controller", 00:31:00.104 "req_id": 1 00:31:00.104 } 00:31:00.104 Got JSON-RPC error response 00:31:00.104 response: 00:31:00.104 { 00:31:00.104 "code": -114, 00:31:00.104 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:00.104 } 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.104 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.363 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
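Above, `-x failover` against the already-claimed 10.0.0.2:4420 portal is still rejected, but the follow-up attach of NVMe0 to the second listener port 4421 goes through, which is the multipath behaviour the test is after; the log continues below by detaching that extra path again, attaching a separate NVMe1 controller to port 4421, and confirming that two controllers are registered. A sketch of that successful path juggling, same rpc.py assumption as before:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Same bdev name, same subsystem, different portal: accepted as an additional path.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Drop that path again, then give the second portal its own controller name.
  $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # Two controllers (NVMe0, NVMe1) should now be reported.
  $rpc bdev_nvme_get_controllers | grep -c NVMe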
00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.364 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.622 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:00.622 17:30:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.561 { 00:31:01.561 "results": [ 00:31:01.561 { 00:31:01.561 "job": "NVMe0n1", 00:31:01.561 "core_mask": "0x1", 00:31:01.561 "workload": "write", 00:31:01.561 "status": "finished", 00:31:01.561 "queue_depth": 128, 00:31:01.561 "io_size": 4096, 00:31:01.561 "runtime": 1.00617, 00:31:01.561 "iops": 20151.66423169047, 00:31:01.561 "mibps": 78.7174384050409, 00:31:01.561 "io_failed": 0, 00:31:01.561 "io_timeout": 0, 00:31:01.561 "avg_latency_us": 6341.762567238772, 00:31:01.561 "min_latency_us": 2703.36, 00:31:01.561 "max_latency_us": 11141.12 00:31:01.561 } 00:31:01.561 ], 00:31:01.561 "core_count": 1 00:31:01.561 } 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3174486 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 3174486 ']' 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3174486 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.561 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3174486 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3174486' 00:31:01.821 killing process with pid 3174486 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3174486 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3174486 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:01.821 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:01.821 [2024-10-01 17:30:58.261865] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:31:01.821 [2024-10-01 17:30:58.261925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174486 ] 00:31:01.821 [2024-10-01 17:30:58.322520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.821 [2024-10-01 17:30:58.353802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.821 [2024-10-01 17:30:58.922101] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name aeccd29d-979e-4686-bcc6-67693f22e055 already exists 00:31:01.821 [2024-10-01 17:30:58.922131] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:aeccd29d-979e-4686-bcc6-67693f22e055 alias for bdev NVMe1n1 00:31:01.821 [2024-10-01 17:30:58.922140] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:01.821 Running I/O for 1 seconds... 00:31:01.821 20148.00 IOPS, 78.70 MiB/s 00:31:01.821 Latency(us) 00:31:01.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.821 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:01.821 NVMe0n1 : 1.01 20151.66 78.72 0.00 0.00 6341.76 2703.36 11141.12 00:31:01.821 =================================================================================================================== 00:31:01.821 Total : 20151.66 78.72 0.00 0.00 6341.76 2703.36 11141.12 00:31:01.821 Received shutdown signal, test time was about 1.000000 seconds 00:31:01.821 00:31:01.821 Latency(us) 00:31:01.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.821 =================================================================================================================== 00:31:01.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.821 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.821 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.821 rmmod nvme_tcp 00:31:01.821 rmmod nvme_fabrics 00:31:01.821 rmmod nvme_keyring 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3174137 ']' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3174137 ']' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3174137' 00:31:02.080 killing process with pid 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3174137 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.080 17:31:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.621 17:31:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.621 00:31:04.621 real 0m12.531s 00:31:04.621 user 0m13.922s 00:31:04.621 sys 0m5.861s 00:31:04.621 17:31:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.621 17:31:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:04.621 ************************************ 00:31:04.621 END TEST nvmf_multicontroller 00:31:04.621 ************************************ 00:31:04.621 17:31:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.622 ************************************ 00:31:04.622 START TEST nvmf_aer 00:31:04.622 ************************************ 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:04.622 * Looking for test storage... 00:31:04.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:04.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.622 --rc genhtml_branch_coverage=1 00:31:04.622 --rc genhtml_function_coverage=1 00:31:04.622 --rc genhtml_legend=1 00:31:04.622 --rc geninfo_all_blocks=1 00:31:04.622 --rc geninfo_unexecuted_blocks=1 00:31:04.622 00:31:04.622 ' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:04.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.622 --rc genhtml_branch_coverage=1 00:31:04.622 --rc genhtml_function_coverage=1 00:31:04.622 --rc genhtml_legend=1 00:31:04.622 --rc geninfo_all_blocks=1 00:31:04.622 --rc geninfo_unexecuted_blocks=1 00:31:04.622 00:31:04.622 ' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:04.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.622 --rc genhtml_branch_coverage=1 00:31:04.622 --rc genhtml_function_coverage=1 00:31:04.622 --rc genhtml_legend=1 00:31:04.622 --rc geninfo_all_blocks=1 00:31:04.622 --rc geninfo_unexecuted_blocks=1 00:31:04.622 00:31:04.622 ' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:04.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.622 --rc genhtml_branch_coverage=1 00:31:04.622 --rc genhtml_function_coverage=1 00:31:04.622 --rc genhtml_legend=1 00:31:04.622 --rc geninfo_all_blocks=1 00:31:04.622 --rc geninfo_unexecuted_blocks=1 00:31:04.622 00:31:04.622 ' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.622 17:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.622 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.623 17:31:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.759 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:12.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:12.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:12.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:12.760 17:31:09 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:12.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.760 17:31:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.760 
17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:31:12.760 00:31:12.760 --- 10.0.0.2 ping statistics --- 00:31:12.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.760 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:31:12.760 00:31:12.760 --- 10.0.0.1 ping statistics --- 00:31:12.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.760 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3179023 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3179023 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3179023 ']' 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:12.760 17:31:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.760 [2024-10-01 17:31:10.418498] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
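The two successful pings above confirm the topology the harness builds before starting the target: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target side, while its sibling port (cvl_0_1) stays in the default namespace as 10.0.0.1 for the initiator side. A condensed sketch of that bring-up, using the same interface and namespace names as this run (the SPDK_NVMF comment tag the harness appends to the iptables rule is omitted), looks like this:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity checks in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target then runs entirely inside the namespace, as in the EAL banner below
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF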
00:31:12.760 [2024-10-01 17:31:10.418570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.760 [2024-10-01 17:31:10.492194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.760 [2024-10-01 17:31:10.533167] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.760 [2024-10-01 17:31:10.533216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.760 [2024-10-01 17:31:10.533225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.760 [2024-10-01 17:31:10.533232] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.760 [2024-10-01 17:31:10.533238] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.760 [2024-10-01 17:31:10.533325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.760 [2024-10-01 17:31:10.533457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.760 [2024-10-01 17:31:10.533625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.760 [2024-10-01 17:31:10.533627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.760 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:12.760 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:31:12.760 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:12.760 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.761 [2024-10-01 17:31:11.274789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:12.761 Malloc0 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.761 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.021 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.022 [2024-10-01 17:31:11.334154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.022 [ 00:31:13.022 { 00:31:13.022 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:13.022 "subtype": "Discovery", 00:31:13.022 "listen_addresses": [], 00:31:13.022 "allow_any_host": true, 00:31:13.022 "hosts": [] 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.022 "subtype": "NVMe", 00:31:13.022 "listen_addresses": [ 00:31:13.022 { 00:31:13.022 "trtype": "TCP", 00:31:13.022 "adrfam": "IPv4", 00:31:13.022 "traddr": "10.0.0.2", 00:31:13.022 "trsvcid": "4420" 00:31:13.022 } 00:31:13.022 ], 00:31:13.022 "allow_any_host": true, 00:31:13.022 "hosts": [], 00:31:13.022 "serial_number": "SPDK00000000000001", 00:31:13.022 "model_number": "SPDK bdev Controller", 00:31:13.022 "max_namespaces": 2, 00:31:13.022 "min_cntlid": 1, 00:31:13.022 "max_cntlid": 65519, 00:31:13.022 "namespaces": [ 00:31:13.022 { 00:31:13.022 "nsid": 1, 00:31:13.022 "bdev_name": "Malloc0", 00:31:13.022 "name": "Malloc0", 00:31:13.022 "nguid": "81C35A0856C84CF0B38BE7887D1C5135", 00:31:13.022 "uuid": "81c35a08-56c8-4cf0-b38b-e7887d1c5135" 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3179193 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:13.022 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 Malloc1 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 Asynchronous Event Request test 00:31:13.283 Attaching to 10.0.0.2 00:31:13.283 Attached to 10.0.0.2 00:31:13.283 Registering asynchronous event callbacks... 00:31:13.283 Starting namespace attribute notice tests for all controllers... 00:31:13.283 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:13.283 aer_cb - Changed Namespace 00:31:13.283 Cleaning up... 
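The "aer_cb - Changed Namespace" line above is the point of the test: the aer tool is connected to cnode1 when the harness hot-adds Malloc1 as namespace 2, so the controller posts a Namespace Attribute Changed notice (log page 4). A minimal reproduction of that flow against the same listener, with scripts/rpc.py standing in for the rpc_cmd wrapper and the -n/-t options passed exactly as the harness passes them, would be:

# target side: subsystem with one namespace, capped at two (-m 2)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: register for AENs; the tool creates the touch file once it is armed,
# which is what the waitforfile loop in the trace polls for
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# once /tmp/aer_touch_file exists, hot-add a second namespace to fire the AEN
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2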
00:31:13.283 [ 00:31:13.283 { 00:31:13.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:13.283 "subtype": "Discovery", 00:31:13.283 "listen_addresses": [], 00:31:13.283 "allow_any_host": true, 00:31:13.283 "hosts": [] 00:31:13.283 }, 00:31:13.283 { 00:31:13.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.283 "subtype": "NVMe", 00:31:13.283 "listen_addresses": [ 00:31:13.283 { 00:31:13.283 "trtype": "TCP", 00:31:13.283 "adrfam": "IPv4", 00:31:13.283 "traddr": "10.0.0.2", 00:31:13.283 "trsvcid": "4420" 00:31:13.283 } 00:31:13.283 ], 00:31:13.283 "allow_any_host": true, 00:31:13.283 "hosts": [], 00:31:13.283 "serial_number": "SPDK00000000000001", 00:31:13.283 "model_number": "SPDK bdev Controller", 00:31:13.283 "max_namespaces": 2, 00:31:13.283 "min_cntlid": 1, 00:31:13.283 "max_cntlid": 65519, 00:31:13.283 "namespaces": [ 00:31:13.283 { 00:31:13.283 "nsid": 1, 00:31:13.283 "bdev_name": "Malloc0", 00:31:13.283 "name": "Malloc0", 00:31:13.283 "nguid": "81C35A0856C84CF0B38BE7887D1C5135", 00:31:13.283 "uuid": "81c35a08-56c8-4cf0-b38b-e7887d1c5135" 00:31:13.283 }, 00:31:13.283 { 00:31:13.283 "nsid": 2, 00:31:13.283 "bdev_name": "Malloc1", 00:31:13.283 "name": "Malloc1", 00:31:13.283 "nguid": "41F11263D08543A2B6D262B901A7CF26", 00:31:13.283 "uuid": "41f11263-d085-43a2-b6d2-62b901a7cf26" 00:31:13.283 } 00:31:13.283 ] 00:31:13.283 } 00:31:13.283 ] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3179193 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.283 rmmod 
nvme_tcp 00:31:13.283 rmmod nvme_fabrics 00:31:13.283 rmmod nvme_keyring 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3179023 ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3179023 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3179023 ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3179023 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3179023 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3179023' 00:31:13.283 killing process with pid 3179023 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3179023 00:31:13.283 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3179023 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.545 17:31:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.091 00:31:16.091 real 0m11.263s 00:31:16.091 user 0m7.808s 00:31:16.091 sys 0m6.021s 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.091 ************************************ 00:31:16.091 END TEST nvmf_aer 00:31:16.091 ************************************ 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.091 17:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.091 ************************************ 00:31:16.091 START TEST nvmf_async_init 00:31:16.091 ************************************ 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:16.092 * Looking for test storage... 00:31:16.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.092 --rc genhtml_branch_coverage=1 00:31:16.092 --rc genhtml_function_coverage=1 00:31:16.092 --rc genhtml_legend=1 00:31:16.092 --rc geninfo_all_blocks=1 00:31:16.092 --rc geninfo_unexecuted_blocks=1 00:31:16.092 00:31:16.092 ' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.092 --rc genhtml_branch_coverage=1 00:31:16.092 --rc genhtml_function_coverage=1 00:31:16.092 --rc genhtml_legend=1 00:31:16.092 --rc geninfo_all_blocks=1 00:31:16.092 --rc geninfo_unexecuted_blocks=1 00:31:16.092 00:31:16.092 ' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.092 --rc genhtml_branch_coverage=1 00:31:16.092 --rc genhtml_function_coverage=1 00:31:16.092 --rc genhtml_legend=1 00:31:16.092 --rc geninfo_all_blocks=1 00:31:16.092 --rc geninfo_unexecuted_blocks=1 00:31:16.092 00:31:16.092 ' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.092 --rc genhtml_branch_coverage=1 00:31:16.092 --rc genhtml_function_coverage=1 00:31:16.092 --rc genhtml_legend=1 00:31:16.092 --rc geninfo_all_blocks=1 00:31:16.092 --rc geninfo_unexecuted_blocks=1 00:31:16.092 00:31:16.092 ' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.092 17:31:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.092 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:16.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:16.093 17:31:14 
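The "common.sh: line 33: [: : integer expression expected" warning above (it shows up again later when dma.sh sources the same file) is explained by the xtrace entry just before it: build_nvmf_app_args runs '[' '' -eq 1 ']', and the shell's [ builtin refuses to compare an empty string with -eq, so the check merely falls through with that warning. A minimal reproduction plus a defensively written variant; the variable name "flag" is only a placeholder, not taken from the harness:

  # reproduces the warning: -eq wants integers on both sides, "" is not one
  flag=""
  [ "$flag" -eq 1 ] && echo enabled       # -> "[: : integer expression expected"

  # hedged alternative: default the value so the comparison stays numeric
  [ "${flag:-0}" -eq 1 ] && echo enabled  # no warning; condition is simply false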
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=58355ff3c3ca4741a5ffe23c4f9f22c3 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.093 17:31:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
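At this point async_init.sh has fixed its test parameters: a 1024 MiB null bdev with 512-byte blocks, and a namespace GUID built by stripping the dashes out of a freshly generated UUID (the dashed form reappears further down as the "uuid" and alias of nvme0n1). A small stand-alone sketch of the same derivation; the arithmetic matches the "num_blocks": 2097152 reported later by bdev_get_bdevs:

  # NGUID the way the test builds it: 32 hex characters, no dashes
  nguid=$(uuidgen | tr -d -)              # this run produced 58355ff3c3ca4741a5ffe23c4f9f22c3

  # size bookkeeping for "bdev_null_create null0 1024 512" (size argument in MiB)
  echo $(( 1024 * 1024 * 1024 / 512 ))    # 2097152 blocks of 512 B = 1024 MiB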
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:22.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:22.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.764 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:23.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:23.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:23.024 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.025 17:31:21 
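The discovery block above is gather_supported_nvmf_pci_devs: it builds whitelists of Intel E810/X722 and Mellanox device IDs, keeps the two E810 ports it finds (vendor 0x8086, device 0x159b, bound to the ice driver), and resolves each PCI address to its kernel interface through sysfs, which is where cvl_0_0 and cvl_0_1 come from. The sysfs lookup on its own, with the PCI addresses taken from this run:

  # each PCI network function exposes its netdev name(s) under .../net/
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"
  done
  # on this runner: cvl_0_0 and cvl_0_1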
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.025 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:31:23.285 00:31:23.285 --- 10.0.0.2 ping statistics --- 00:31:23.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.285 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:31:23.285 00:31:23.285 --- 10.0.0.1 ping statistics --- 00:31:23.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.285 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3183521 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3183521 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3183521 ']' 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.285 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.285 [2024-10-01 17:31:21.726552] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
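nvmf_tcp_init then wires the two ports into the usual phy-test topology: the target-side port cvl_0_0 is moved into a private namespace cvl_0_0_ns_spdk and given 10.0.0.2/24 so the two ports behave like two separate hosts, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP/4420 on the initiator interface, and one ping in each direction confirms connectivity before nvme-tcp is loaded and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using the interface and namespace names from the log (root privileges assumed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp
  # the target application is then started inside the namespace, e.g.:
  #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1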
00:31:23.285 [2024-10-01 17:31:21.726622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.285 [2024-10-01 17:31:21.798506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.546 [2024-10-01 17:31:21.836545] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.546 [2024-10-01 17:31:21.836595] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.547 [2024-10-01 17:31:21.836603] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.547 [2024-10-01 17:31:21.836610] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.547 [2024-10-01 17:31:21.836616] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.547 [2024-10-01 17:31:21.836638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 [2024-10-01 17:31:21.970566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 null0 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:23.547 17:31:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 58355ff3c3ca4741a5ffe23c4f9f22c3 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.547 [2024-10-01 17:31:22.030878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.547 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.808 nvme0n1 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.808 [ 00:31:23.808 { 00:31:23.808 "name": "nvme0n1", 00:31:23.808 "aliases": [ 00:31:23.808 "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3" 00:31:23.808 ], 00:31:23.808 "product_name": "NVMe disk", 00:31:23.808 "block_size": 512, 00:31:23.808 "num_blocks": 2097152, 00:31:23.808 "uuid": "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3", 00:31:23.808 "numa_id": 0, 00:31:23.808 "assigned_rate_limits": { 00:31:23.808 "rw_ios_per_sec": 0, 00:31:23.808 "rw_mbytes_per_sec": 0, 00:31:23.808 "r_mbytes_per_sec": 0, 00:31:23.808 "w_mbytes_per_sec": 0 00:31:23.808 }, 00:31:23.808 "claimed": false, 00:31:23.808 "zoned": false, 00:31:23.808 "supported_io_types": { 00:31:23.808 "read": true, 00:31:23.808 "write": true, 00:31:23.808 "unmap": false, 00:31:23.808 "flush": true, 00:31:23.808 "reset": true, 00:31:23.808 "nvme_admin": true, 00:31:23.808 "nvme_io": true, 00:31:23.808 "nvme_io_md": false, 00:31:23.808 "write_zeroes": true, 00:31:23.808 "zcopy": false, 00:31:23.808 "get_zone_info": false, 00:31:23.808 "zone_management": false, 00:31:23.808 "zone_append": false, 00:31:23.808 "compare": true, 00:31:23.808 "compare_and_write": true, 00:31:23.808 "abort": true, 00:31:23.808 "seek_hole": false, 00:31:23.808 "seek_data": false, 00:31:23.808 "copy": true, 00:31:23.808 "nvme_iov_md": false 00:31:23.808 }, 00:31:23.808 
"memory_domains": [ 00:31:23.808 { 00:31:23.808 "dma_device_id": "system", 00:31:23.808 "dma_device_type": 1 00:31:23.808 } 00:31:23.808 ], 00:31:23.808 "driver_specific": { 00:31:23.808 "nvme": [ 00:31:23.808 { 00:31:23.808 "trid": { 00:31:23.808 "trtype": "TCP", 00:31:23.808 "adrfam": "IPv4", 00:31:23.808 "traddr": "10.0.0.2", 00:31:23.808 "trsvcid": "4420", 00:31:23.808 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:23.808 }, 00:31:23.808 "ctrlr_data": { 00:31:23.808 "cntlid": 1, 00:31:23.808 "vendor_id": "0x8086", 00:31:23.808 "model_number": "SPDK bdev Controller", 00:31:23.808 "serial_number": "00000000000000000000", 00:31:23.808 "firmware_revision": "25.01", 00:31:23.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.808 "oacs": { 00:31:23.808 "security": 0, 00:31:23.808 "format": 0, 00:31:23.808 "firmware": 0, 00:31:23.808 "ns_manage": 0 00:31:23.808 }, 00:31:23.808 "multi_ctrlr": true, 00:31:23.808 "ana_reporting": false 00:31:23.808 }, 00:31:23.808 "vs": { 00:31:23.808 "nvme_version": "1.3" 00:31:23.808 }, 00:31:23.808 "ns_data": { 00:31:23.808 "id": 1, 00:31:23.808 "can_share": true 00:31:23.808 } 00:31:23.808 } 00:31:23.808 ], 00:31:23.808 "mp_policy": "active_passive" 00:31:23.808 } 00:31:23.808 } 00:31:23.808 ] 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.808 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:23.808 [2024-10-01 17:31:22.308021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:23.808 [2024-10-01 17:31:22.308087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b28a0 (9): Bad file descriptor 00:31:24.069 [2024-10-01 17:31:22.440093] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 [ 00:31:24.069 { 00:31:24.069 "name": "nvme0n1", 00:31:24.069 "aliases": [ 00:31:24.069 "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3" 00:31:24.069 ], 00:31:24.069 "product_name": "NVMe disk", 00:31:24.069 "block_size": 512, 00:31:24.069 "num_blocks": 2097152, 00:31:24.069 "uuid": "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3", 00:31:24.069 "numa_id": 0, 00:31:24.069 "assigned_rate_limits": { 00:31:24.069 "rw_ios_per_sec": 0, 00:31:24.069 "rw_mbytes_per_sec": 0, 00:31:24.069 "r_mbytes_per_sec": 0, 00:31:24.069 "w_mbytes_per_sec": 0 00:31:24.069 }, 00:31:24.069 "claimed": false, 00:31:24.069 "zoned": false, 00:31:24.069 "supported_io_types": { 00:31:24.069 "read": true, 00:31:24.069 "write": true, 00:31:24.069 "unmap": false, 00:31:24.069 "flush": true, 00:31:24.069 "reset": true, 00:31:24.069 "nvme_admin": true, 00:31:24.069 "nvme_io": true, 00:31:24.069 "nvme_io_md": false, 00:31:24.069 "write_zeroes": true, 00:31:24.069 "zcopy": false, 00:31:24.069 "get_zone_info": false, 00:31:24.069 "zone_management": false, 00:31:24.069 "zone_append": false, 00:31:24.069 "compare": true, 00:31:24.069 "compare_and_write": true, 00:31:24.069 "abort": true, 00:31:24.069 "seek_hole": false, 00:31:24.069 "seek_data": false, 00:31:24.069 "copy": true, 00:31:24.069 "nvme_iov_md": false 00:31:24.069 }, 00:31:24.069 "memory_domains": [ 00:31:24.069 { 00:31:24.069 "dma_device_id": "system", 00:31:24.069 "dma_device_type": 1 00:31:24.069 } 00:31:24.069 ], 00:31:24.069 "driver_specific": { 00:31:24.069 "nvme": [ 00:31:24.069 { 00:31:24.069 "trid": { 00:31:24.069 "trtype": "TCP", 00:31:24.069 "adrfam": "IPv4", 00:31:24.069 "traddr": "10.0.0.2", 00:31:24.069 "trsvcid": "4420", 00:31:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:24.069 }, 00:31:24.069 "ctrlr_data": { 00:31:24.069 "cntlid": 2, 00:31:24.069 "vendor_id": "0x8086", 00:31:24.069 "model_number": "SPDK bdev Controller", 00:31:24.069 "serial_number": "00000000000000000000", 00:31:24.069 "firmware_revision": "25.01", 00:31:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.069 "oacs": { 00:31:24.069 "security": 0, 00:31:24.069 "format": 0, 00:31:24.069 "firmware": 0, 00:31:24.069 "ns_manage": 0 00:31:24.069 }, 00:31:24.069 "multi_ctrlr": true, 00:31:24.069 "ana_reporting": false 00:31:24.069 }, 00:31:24.069 "vs": { 00:31:24.069 "nvme_version": "1.3" 00:31:24.069 }, 00:31:24.069 "ns_data": { 00:31:24.069 "id": 1, 00:31:24.069 "can_share": true 00:31:24.069 } 00:31:24.069 } 00:31:24.069 ], 00:31:24.069 "mp_policy": "active_passive" 00:31:24.069 } 00:31:24.069 } 00:31:24.069 ] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
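The body of the test is the RPC sequence logged above: create the TCP transport, back it with the 1024 MiB null bdev, expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode0 under the pre-computed NGUID, listen on 10.0.0.2:4420, attach the bdev_nvme initiator so the namespace surfaces as nvme0n1, and finally reset the controller. Comparing the two bdev_get_bdevs dumps shows the observable effect of the reset: the same namespace comes back, but ctrlr_data.cntlid moves from 1 to 2 because the reset tears down the association and a new controller is handed out on reconnect (the "Bad file descriptor" flush message is part of dropping the old qpair). A minimal sketch of the same sequence issued with scripts/rpc.py against the default RPC socket; the harness wraps each call in its rpc_cmd helper instead:

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g 58355ff3c3ca4741a5ffe23c4f9f22c3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0

  # pulling just the controller ID out of the dump (jq is not part of the harness):
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after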
00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xprQKfIZsZ 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xprQKfIZsZ 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xprQKfIZsZ 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 [2024-10-01 17:31:22.528720] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.069 [2024-10-01 17:31:22.528833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.069 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.069 [2024-10-01 17:31:22.552802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:24.330 nvme0n1 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.330 [ 00:31:24.330 { 00:31:24.330 "name": "nvme0n1", 00:31:24.330 "aliases": [ 00:31:24.330 "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3" 00:31:24.330 ], 00:31:24.330 "product_name": "NVMe disk", 00:31:24.330 "block_size": 512, 00:31:24.330 "num_blocks": 2097152, 00:31:24.330 "uuid": "58355ff3-c3ca-4741-a5ff-e23c4f9f22c3", 00:31:24.330 "numa_id": 0, 00:31:24.330 "assigned_rate_limits": { 00:31:24.330 "rw_ios_per_sec": 0, 00:31:24.330 "rw_mbytes_per_sec": 0, 00:31:24.330 "r_mbytes_per_sec": 0, 00:31:24.330 "w_mbytes_per_sec": 0 00:31:24.330 }, 00:31:24.330 "claimed": false, 00:31:24.330 "zoned": false, 00:31:24.330 "supported_io_types": { 00:31:24.330 "read": true, 00:31:24.330 "write": true, 00:31:24.330 "unmap": false, 00:31:24.330 "flush": true, 00:31:24.330 "reset": true, 00:31:24.330 "nvme_admin": true, 00:31:24.330 "nvme_io": true, 00:31:24.330 "nvme_io_md": false, 00:31:24.330 "write_zeroes": true, 00:31:24.330 "zcopy": false, 00:31:24.330 "get_zone_info": false, 00:31:24.330 "zone_management": false, 00:31:24.330 "zone_append": false, 00:31:24.330 "compare": true, 00:31:24.330 "compare_and_write": true, 00:31:24.330 "abort": true, 00:31:24.330 "seek_hole": false, 00:31:24.330 "seek_data": false, 00:31:24.330 "copy": true, 00:31:24.330 "nvme_iov_md": false 00:31:24.330 }, 00:31:24.330 "memory_domains": [ 00:31:24.330 { 00:31:24.330 "dma_device_id": "system", 00:31:24.330 "dma_device_type": 1 00:31:24.330 } 00:31:24.330 ], 00:31:24.330 "driver_specific": { 00:31:24.330 "nvme": [ 00:31:24.330 { 00:31:24.330 "trid": { 00:31:24.330 "trtype": "TCP", 00:31:24.330 "adrfam": "IPv4", 00:31:24.330 "traddr": "10.0.0.2", 00:31:24.330 "trsvcid": "4421", 00:31:24.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:24.330 }, 00:31:24.330 "ctrlr_data": { 00:31:24.330 "cntlid": 3, 00:31:24.330 "vendor_id": "0x8086", 00:31:24.330 "model_number": "SPDK bdev Controller", 00:31:24.330 "serial_number": "00000000000000000000", 00:31:24.330 "firmware_revision": "25.01", 00:31:24.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.330 "oacs": { 00:31:24.330 "security": 0, 00:31:24.330 "format": 0, 00:31:24.330 "firmware": 0, 00:31:24.330 "ns_manage": 0 00:31:24.330 }, 00:31:24.330 "multi_ctrlr": true, 00:31:24.330 "ana_reporting": false 00:31:24.330 }, 00:31:24.330 "vs": { 00:31:24.330 "nvme_version": "1.3" 00:31:24.330 }, 00:31:24.330 "ns_data": { 00:31:24.330 "id": 1, 00:31:24.330 "can_share": true 00:31:24.330 } 00:31:24.330 } 00:31:24.330 ], 00:31:24.330 "mp_policy": "active_passive" 00:31:24.330 } 00:31:24.330 } 00:31:24.330 ] 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xprQKfIZsZ 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
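The last leg repeats the attach through a TLS-protected listener. A PSK interchange secret in the NVMeTLSkey-1:01:... format is written to a mode-0600 temp file and registered with the keyring as key0, allow-any-host is switched off, a second listener is added on port 4421 with --secure-channel, the host NQN is granted access with that PSK, and bdev_nvme_attach_controller connects to 4421 with --psk key0; both the listener and the attach path note that TLS support is still considered experimental. A condensed sketch of that flow with the key material copied from the log (the temp file name is simply what mktemp returned in this run):

  key=$(mktemp)                            # /tmp/tmp.xprQKfIZsZ in the run above
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
  chmod 0600 "$key"
  ./scripts/rpc.py keyring_file_add_key key0 "$key"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$key"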
00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.330 rmmod nvme_tcp 00:31:24.330 rmmod nvme_fabrics 00:31:24.330 rmmod nvme_keyring 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3183521 ']' 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3183521 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3183521 ']' 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3183521 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3183521 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3183521' 00:31:24.330 killing process with pid 3183521 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3183521 00:31:24.330 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3183521 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
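nvmftestfini then unwinds everything in reverse: the kernel nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 3183521 here) is killed, the comment-tagged iptables rules are filtered back out, the test namespace is removed, and the initiator address is flushed. Roughly, with the caveat that _remove_spdk_ns is an SPDK helper whose namespace deletion is only paraphrased here:

  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill 3183521                                          # the nvmf_tgt pid from this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules the test tagged
  ip netns delete cvl_0_0_ns_spdk                       # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1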
00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.591 17:31:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.499 17:31:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.499 00:31:26.499 real 0m10.908s 00:31:26.499 user 0m3.416s 00:31:26.499 sys 0m5.877s 00:31:26.499 17:31:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.499 17:31:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.499 ************************************ 00:31:26.499 END TEST nvmf_async_init 00:31:26.499 ************************************ 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.761 ************************************ 00:31:26.761 START TEST dma 00:31:26.761 ************************************ 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:26.761 * Looking for test storage... 00:31:26.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.761 --rc genhtml_branch_coverage=1 00:31:26.761 --rc genhtml_function_coverage=1 00:31:26.761 --rc genhtml_legend=1 00:31:26.761 --rc geninfo_all_blocks=1 00:31:26.761 --rc geninfo_unexecuted_blocks=1 00:31:26.761 00:31:26.761 ' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.761 --rc genhtml_branch_coverage=1 00:31:26.761 --rc genhtml_function_coverage=1 00:31:26.761 --rc genhtml_legend=1 00:31:26.761 --rc geninfo_all_blocks=1 00:31:26.761 --rc geninfo_unexecuted_blocks=1 00:31:26.761 00:31:26.761 ' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.761 --rc genhtml_branch_coverage=1 00:31:26.761 --rc genhtml_function_coverage=1 00:31:26.761 --rc genhtml_legend=1 00:31:26.761 --rc geninfo_all_blocks=1 00:31:26.761 --rc geninfo_unexecuted_blocks=1 00:31:26.761 00:31:26.761 ' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:26.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.761 --rc genhtml_branch_coverage=1 00:31:26.761 --rc genhtml_function_coverage=1 00:31:26.761 --rc genhtml_legend=1 00:31:26.761 --rc geninfo_all_blocks=1 00:31:26.761 --rc geninfo_unexecuted_blocks=1 00:31:26.761 00:31:26.761 ' 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.761 
17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.761 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:27.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:27.023 00:31:27.023 real 0m0.231s 00:31:27.023 user 0m0.130s 00:31:27.023 sys 0m0.116s 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:27.023 ************************************ 00:31:27.023 END TEST dma 00:31:27.023 ************************************ 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.023 ************************************ 00:31:27.023 START TEST nvmf_identify 00:31:27.023 
************************************ 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:27.023 * Looking for test storage... 00:31:27.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:31:27.023 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.286 --rc genhtml_branch_coverage=1 00:31:27.286 --rc genhtml_function_coverage=1 00:31:27.286 --rc genhtml_legend=1 00:31:27.286 --rc geninfo_all_blocks=1 00:31:27.286 --rc geninfo_unexecuted_blocks=1 00:31:27.286 00:31:27.286 ' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.286 --rc genhtml_branch_coverage=1 00:31:27.286 --rc genhtml_function_coverage=1 00:31:27.286 --rc genhtml_legend=1 00:31:27.286 --rc geninfo_all_blocks=1 00:31:27.286 --rc geninfo_unexecuted_blocks=1 00:31:27.286 00:31:27.286 ' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.286 --rc genhtml_branch_coverage=1 00:31:27.286 --rc genhtml_function_coverage=1 00:31:27.286 --rc genhtml_legend=1 00:31:27.286 --rc geninfo_all_blocks=1 00:31:27.286 --rc geninfo_unexecuted_blocks=1 00:31:27.286 00:31:27.286 ' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.286 --rc genhtml_branch_coverage=1 00:31:27.286 --rc genhtml_function_coverage=1 00:31:27.286 --rc genhtml_legend=1 00:31:27.286 --rc geninfo_all_blocks=1 00:31:27.286 --rc geninfo_unexecuted_blocks=1 00:31:27.286 00:31:27.286 ' 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.286 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:27.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.287 17:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.429 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.429 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.429 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:35.430 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:35.430 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
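The device probe traced here works by matching each PCI function against the vendor:device ID tables built above (e810, x722, mlx) and then resolving a match to its kernel net device through sysfs, which is where the "Found net devices under ..." lines that follow come from. A minimal sketch of that lookup, using the 0x8086:0x159b (E810) IDs and the sysfs paths seen in this run purely as an illustration, not the common.sh code itself:

  # sketch only: walk PCI functions, keep the ones whose vendor:device pair is in
  # the supported-NIC tables, and read the interface name(s) from .../net/
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")     # 0x8086 for the Intel ports in this run
      device=$(cat "$pci/device")     # 0x159b matches the e810 table entry
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} -> $(ls "$pci/net" 2>/dev/null)"
  done

On this host that resolves 0000:4b:00.0 and 0000:4b:00.1 to cvl_0_0 and cvl_0_1, which is exactly what the next lines report.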
00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:35.430 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:35.430 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.430 17:31:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:31:35.430 00:31:35.430 --- 10.0.0.2 ping statistics --- 00:31:35.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.430 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:31:35.430 00:31:35.430 --- 10.0.0.1 ping statistics --- 00:31:35.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.430 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:35.430 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3187924 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3187924 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3187924 ']' 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.431 17:31:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.431 [2024-10-01 17:31:33.241664] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:31:35.431 [2024-10-01 17:31:33.241733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.431 [2024-10-01 17:31:33.315149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.431 [2024-10-01 17:31:33.355682] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.431 [2024-10-01 17:31:33.355725] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.431 [2024-10-01 17:31:33.355733] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.431 [2024-10-01 17:31:33.355739] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.431 [2024-10-01 17:31:33.355746] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.431 [2024-10-01 17:31:33.355893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.431 [2024-10-01 17:31:33.356018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.431 [2024-10-01 17:31:33.356118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.431 [2024-10-01 17:31:33.356119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 [2024-10-01 17:31:34.060700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 Malloc0 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.692 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.692 [2024-10-01 17:31:34.160016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:35.693 [ 00:31:35.693 { 00:31:35.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:35.693 "subtype": "Discovery", 00:31:35.693 "listen_addresses": [ 00:31:35.693 { 00:31:35.693 "trtype": "TCP", 00:31:35.693 "adrfam": "IPv4", 00:31:35.693 "traddr": "10.0.0.2", 00:31:35.693 "trsvcid": "4420" 00:31:35.693 } 00:31:35.693 ], 00:31:35.693 "allow_any_host": true, 00:31:35.693 "hosts": [] 00:31:35.693 }, 00:31:35.693 { 00:31:35.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.693 "subtype": "NVMe", 00:31:35.693 "listen_addresses": [ 00:31:35.693 { 00:31:35.693 "trtype": "TCP", 00:31:35.693 "adrfam": "IPv4", 00:31:35.693 "traddr": "10.0.0.2", 00:31:35.693 "trsvcid": "4420" 00:31:35.693 } 00:31:35.693 ], 00:31:35.693 "allow_any_host": true, 00:31:35.693 "hosts": [], 00:31:35.693 "serial_number": "SPDK00000000000001", 00:31:35.693 "model_number": "SPDK bdev Controller", 00:31:35.693 "max_namespaces": 32, 00:31:35.693 "min_cntlid": 1, 00:31:35.693 "max_cntlid": 65519, 00:31:35.693 "namespaces": [ 00:31:35.693 { 00:31:35.693 "nsid": 1, 00:31:35.693 "bdev_name": "Malloc0", 00:31:35.693 "name": "Malloc0", 00:31:35.693 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:35.693 "eui64": "ABCDEF0123456789", 00:31:35.693 "uuid": "d21b665d-302e-4bca-afcb-1e958ee59228" 00:31:35.693 } 00:31:35.693 ] 00:31:35.693 } 00:31:35.693 ] 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.693 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:35.693 [2024-10-01 17:31:34.221411] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:31:35.693 [2024-10-01 17:31:34.221453] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188161 ] 00:31:35.958 [2024-10-01 17:31:34.252636] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:35.958 [2024-10-01 17:31:34.252687] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:35.958 [2024-10-01 17:31:34.252693] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:35.958 [2024-10-01 17:31:34.252705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:35.958 [2024-10-01 17:31:34.252715] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:35.958 [2024-10-01 17:31:34.256274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:35.958 [2024-10-01 17:31:34.256305] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc4f0d0 0 00:31:35.958 [2024-10-01 17:31:34.264009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:35.958 [2024-10-01 17:31:34.264021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:35.958 [2024-10-01 17:31:34.264027] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:35.958 [2024-10-01 17:31:34.264030] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:35.958 [2024-10-01 17:31:34.264057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.264063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.264067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.264079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:35.958 [2024-10-01 17:31:34.264096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.272004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.272013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.272017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.272032] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:35.958 [2024-10-01 17:31:34.272038] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:35.958 [2024-10-01 17:31:34.272043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:35.958 [2024-10-01 17:31:34.272056] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.272075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.272089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.272288] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.272294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.272298] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.272307] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:35.958 [2024-10-01 17:31:34.272315] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:35.958 [2024-10-01 17:31:34.272322] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272325] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272329] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.272336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.272346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.272547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.272554] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.272558] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.272567] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:35.958 [2024-10-01 17:31:34.272575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.272581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.272595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.272606] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 
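For reference, the rpc_cmd calls a few lines above (create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, publish it as nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420 for both the subsystem and discovery) correspond to plain scripts/rpc.py invocations against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace. A hedged sketch of that sequence, with arguments copied from the trace:

  # equivalent of the rpc_cmd wrapper used by the test; sketch only, paths relative to the spdk repo
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # prints the JSON listing shown above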
[2024-10-01 17:31:34.272803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.272810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.272813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.272822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.272832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272835] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.272839] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.272846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.272859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.273019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.273026] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.273030] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273034] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.273038] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:35.958 [2024-10-01 17:31:34.273043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.273050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.273156] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:35.958 [2024-10-01 17:31:34.273161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.273169] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273173] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273177] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.273183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.273194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.273370] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.273377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:31:35.958 [2024-10-01 17:31:34.273380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.273389] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:35.958 [2024-10-01 17:31:34.273398] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273402] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.958 [2024-10-01 17:31:34.273412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.958 [2024-10-01 17:31:34.273422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.958 [2024-10-01 17:31:34.273590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.958 [2024-10-01 17:31:34.273597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.958 [2024-10-01 17:31:34.273600] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.958 [2024-10-01 17:31:34.273608] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:35.958 [2024-10-01 17:31:34.273613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:35.958 [2024-10-01 17:31:34.273621] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:35.958 [2024-10-01 17:31:34.273630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:35.958 [2024-10-01 17:31:34.273638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.958 [2024-10-01 17:31:34.273642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.273649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.959 [2024-10-01 17:31:34.273659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.959 [2024-10-01 17:31:34.273879] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.959 [2024-10-01 17:31:34.273885] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.959 [2024-10-01 17:31:34.273889] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.273893] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4f0d0): datao=0, datal=4096, cccid=0 00:31:35.959 [2024-10-01 17:31:34.273898] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9540) on tqpair(0xc4f0d0): expected_datao=0, payload_size=4096 
00:31:35.959 [2024-10-01 17:31:34.273902] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.273914] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.273919] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.274081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.274084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.959 [2024-10-01 17:31:34.274095] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:35.959 [2024-10-01 17:31:34.274100] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:35.959 [2024-10-01 17:31:34.274105] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:35.959 [2024-10-01 17:31:34.274110] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:35.959 [2024-10-01 17:31:34.274114] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:35.959 [2024-10-01 17:31:34.274119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:35.959 [2024-10-01 17:31:34.274127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:35.959 [2024-10-01 17:31:34.274134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:35.959 [2024-10-01 17:31:34.274160] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.959 [2024-10-01 17:31:34.274360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.274367] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.274370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.959 [2024-10-01 17:31:34.274383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.959 [2024-10-01 17:31:34.274403] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274410] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.959 [2024-10-01 17:31:34.274422] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274426] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274429] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.959 [2024-10-01 17:31:34.274441] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274448] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.959 [2024-10-01 17:31:34.274459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:35.959 [2024-10-01 17:31:34.274469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:35.959 [2024-10-01 17:31:34.274476] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.959 [2024-10-01 17:31:34.274498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9540, cid 0, qid 0 00:31:35.959 [2024-10-01 17:31:34.274503] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb96c0, cid 1, qid 0 00:31:35.959 [2024-10-01 17:31:34.274508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9840, cid 2, qid 0 00:31:35.959 [2024-10-01 17:31:34.274513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.959 [2024-10-01 17:31:34.274518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b40, cid 4, qid 0 00:31:35.959 [2024-10-01 17:31:34.274753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.274760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.274763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b40) on 
tqpair=0xc4f0d0 00:31:35.959 [2024-10-01 17:31:34.274772] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:35.959 [2024-10-01 17:31:34.274777] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:35.959 [2024-10-01 17:31:34.274789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.274793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.274799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.959 [2024-10-01 17:31:34.274809] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b40, cid 4, qid 0 00:31:35.959 [2024-10-01 17:31:34.274984] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.959 [2024-10-01 17:31:34.274990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.959 [2024-10-01 17:31:34.274998] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4f0d0): datao=0, datal=4096, cccid=4 00:31:35.959 [2024-10-01 17:31:34.275006] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9b40) on tqpair(0xc4f0d0): expected_datao=0, payload_size=4096 00:31:35.959 [2024-10-01 17:31:34.275010] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275017] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275021] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.275208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.275212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b40) on tqpair=0xc4f0d0 00:31:35.959 [2024-10-01 17:31:34.275226] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:35.959 [2024-10-01 17:31:34.275251] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275255] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.275262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.959 [2024-10-01 17:31:34.275269] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4f0d0) 00:31:35.959 [2024-10-01 17:31:34.275282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.959 [2024-10-01 17:31:34.275295] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b40, cid 4, qid 0 00:31:35.959 [2024-10-01 17:31:34.275300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9cc0, cid 5, qid 0 00:31:35.959 [2024-10-01 17:31:34.275499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.959 [2024-10-01 17:31:34.275505] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.959 [2024-10-01 17:31:34.275508] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4f0d0): datao=0, datal=1024, cccid=4 00:31:35.959 [2024-10-01 17:31:34.275517] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9b40) on tqpair(0xc4f0d0): expected_datao=0, payload_size=1024 00:31:35.959 [2024-10-01 17:31:34.275521] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275528] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275531] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.275543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.275549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.959 [2024-10-01 17:31:34.275553] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9cc0) on tqpair=0xc4f0d0 00:31:35.959 [2024-10-01 17:31:34.317169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.959 [2024-10-01 17:31:34.317180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.959 [2024-10-01 17:31:34.317184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b40) on tqpair=0xc4f0d0 00:31:35.960 [2024-10-01 17:31:34.317199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4f0d0) 00:31:35.960 [2024-10-01 17:31:34.317210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.960 [2024-10-01 17:31:34.317227] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b40, cid 4, qid 0 00:31:35.960 [2024-10-01 17:31:34.317404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.960 [2024-10-01 17:31:34.317411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.960 [2024-10-01 17:31:34.317415] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4f0d0): datao=0, datal=3072, cccid=4 00:31:35.960 [2024-10-01 17:31:34.317423] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9b40) on tqpair(0xc4f0d0): expected_datao=0, payload_size=3072 00:31:35.960 [2024-10-01 17:31:34.317428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317435] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.960 
[2024-10-01 17:31:34.317438] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.960 [2024-10-01 17:31:34.317591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.960 [2024-10-01 17:31:34.317595] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b40) on tqpair=0xc4f0d0 00:31:35.960 [2024-10-01 17:31:34.317607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317611] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4f0d0) 00:31:35.960 [2024-10-01 17:31:34.317618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.960 [2024-10-01 17:31:34.317631] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb9b40, cid 4, qid 0 00:31:35.960 [2024-10-01 17:31:34.317847] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.960 [2024-10-01 17:31:34.317853] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.960 [2024-10-01 17:31:34.317857] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317861] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4f0d0): datao=0, datal=8, cccid=4 00:31:35.960 [2024-10-01 17:31:34.317865] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb9b40) on tqpair(0xc4f0d0): expected_datao=0, payload_size=8 00:31:35.960 [2024-10-01 17:31:34.317870] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317876] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.317880] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.365007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.960 [2024-10-01 17:31:34.365018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.960 [2024-10-01 17:31:34.365021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.960 [2024-10-01 17:31:34.365028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9b40) on tqpair=0xc4f0d0 00:31:35.960 ===================================================== 00:31:35.960 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:35.960 ===================================================== 00:31:35.960 Controller Capabilities/Features 00:31:35.960 ================================ 00:31:35.960 Vendor ID: 0000 00:31:35.960 Subsystem Vendor ID: 0000 00:31:35.960 Serial Number: .................... 00:31:35.960 Model Number: ........................................ 
00:31:35.960 Firmware Version: 25.01 00:31:35.960 Recommended Arb Burst: 0 00:31:35.960 IEEE OUI Identifier: 00 00 00 00:31:35.960 Multi-path I/O 00:31:35.960 May have multiple subsystem ports: No 00:31:35.960 May have multiple controllers: No 00:31:35.960 Associated with SR-IOV VF: No 00:31:35.960 Max Data Transfer Size: 131072 00:31:35.960 Max Number of Namespaces: 0 00:31:35.960 Max Number of I/O Queues: 1024 00:31:35.960 NVMe Specification Version (VS): 1.3 00:31:35.960 NVMe Specification Version (Identify): 1.3 00:31:35.960 Maximum Queue Entries: 128 00:31:35.960 Contiguous Queues Required: Yes 00:31:35.960 Arbitration Mechanisms Supported 00:31:35.960 Weighted Round Robin: Not Supported 00:31:35.960 Vendor Specific: Not Supported 00:31:35.960 Reset Timeout: 15000 ms 00:31:35.960 Doorbell Stride: 4 bytes 00:31:35.960 NVM Subsystem Reset: Not Supported 00:31:35.960 Command Sets Supported 00:31:35.960 NVM Command Set: Supported 00:31:35.960 Boot Partition: Not Supported 00:31:35.960 Memory Page Size Minimum: 4096 bytes 00:31:35.960 Memory Page Size Maximum: 4096 bytes 00:31:35.960 Persistent Memory Region: Not Supported 00:31:35.960 Optional Asynchronous Events Supported 00:31:35.960 Namespace Attribute Notices: Not Supported 00:31:35.960 Firmware Activation Notices: Not Supported 00:31:35.960 ANA Change Notices: Not Supported 00:31:35.960 PLE Aggregate Log Change Notices: Not Supported 00:31:35.960 LBA Status Info Alert Notices: Not Supported 00:31:35.960 EGE Aggregate Log Change Notices: Not Supported 00:31:35.960 Normal NVM Subsystem Shutdown event: Not Supported 00:31:35.960 Zone Descriptor Change Notices: Not Supported 00:31:35.960 Discovery Log Change Notices: Supported 00:31:35.960 Controller Attributes 00:31:35.960 128-bit Host Identifier: Not Supported 00:31:35.960 Non-Operational Permissive Mode: Not Supported 00:31:35.960 NVM Sets: Not Supported 00:31:35.960 Read Recovery Levels: Not Supported 00:31:35.960 Endurance Groups: Not Supported 00:31:35.960 Predictable Latency Mode: Not Supported 00:31:35.960 Traffic Based Keep ALive: Not Supported 00:31:35.960 Namespace Granularity: Not Supported 00:31:35.960 SQ Associations: Not Supported 00:31:35.960 UUID List: Not Supported 00:31:35.960 Multi-Domain Subsystem: Not Supported 00:31:35.960 Fixed Capacity Management: Not Supported 00:31:35.960 Variable Capacity Management: Not Supported 00:31:35.960 Delete Endurance Group: Not Supported 00:31:35.960 Delete NVM Set: Not Supported 00:31:35.960 Extended LBA Formats Supported: Not Supported 00:31:35.960 Flexible Data Placement Supported: Not Supported 00:31:35.960 00:31:35.960 Controller Memory Buffer Support 00:31:35.960 ================================ 00:31:35.960 Supported: No 00:31:35.960 00:31:35.960 Persistent Memory Region Support 00:31:35.960 ================================ 00:31:35.960 Supported: No 00:31:35.960 00:31:35.960 Admin Command Set Attributes 00:31:35.960 ============================ 00:31:35.960 Security Send/Receive: Not Supported 00:31:35.960 Format NVM: Not Supported 00:31:35.960 Firmware Activate/Download: Not Supported 00:31:35.960 Namespace Management: Not Supported 00:31:35.960 Device Self-Test: Not Supported 00:31:35.960 Directives: Not Supported 00:31:35.960 NVMe-MI: Not Supported 00:31:35.960 Virtualization Management: Not Supported 00:31:35.960 Doorbell Buffer Config: Not Supported 00:31:35.960 Get LBA Status Capability: Not Supported 00:31:35.960 Command & Feature Lockdown Capability: Not Supported 00:31:35.960 Abort Command Limit: 1 00:31:35.960 Async 
Event Request Limit: 4 00:31:35.960 Number of Firmware Slots: N/A 00:31:35.960 Firmware Slot 1 Read-Only: N/A 00:31:35.960 Firmware Activation Without Reset: N/A 00:31:35.960 Multiple Update Detection Support: N/A 00:31:35.960 Firmware Update Granularity: No Information Provided 00:31:35.960 Per-Namespace SMART Log: No 00:31:35.960 Asymmetric Namespace Access Log Page: Not Supported 00:31:35.960 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:35.960 Command Effects Log Page: Not Supported 00:31:35.960 Get Log Page Extended Data: Supported 00:31:35.960 Telemetry Log Pages: Not Supported 00:31:35.960 Persistent Event Log Pages: Not Supported 00:31:35.960 Supported Log Pages Log Page: May Support 00:31:35.960 Commands Supported & Effects Log Page: Not Supported 00:31:35.960 Feature Identifiers & Effects Log Page:May Support 00:31:35.960 NVMe-MI Commands & Effects Log Page: May Support 00:31:35.960 Data Area 4 for Telemetry Log: Not Supported 00:31:35.960 Error Log Page Entries Supported: 128 00:31:35.960 Keep Alive: Not Supported 00:31:35.960 00:31:35.960 NVM Command Set Attributes 00:31:35.960 ========================== 00:31:35.960 Submission Queue Entry Size 00:31:35.960 Max: 1 00:31:35.960 Min: 1 00:31:35.960 Completion Queue Entry Size 00:31:35.960 Max: 1 00:31:35.960 Min: 1 00:31:35.960 Number of Namespaces: 0 00:31:35.960 Compare Command: Not Supported 00:31:35.960 Write Uncorrectable Command: Not Supported 00:31:35.960 Dataset Management Command: Not Supported 00:31:35.960 Write Zeroes Command: Not Supported 00:31:35.960 Set Features Save Field: Not Supported 00:31:35.960 Reservations: Not Supported 00:31:35.960 Timestamp: Not Supported 00:31:35.960 Copy: Not Supported 00:31:35.960 Volatile Write Cache: Not Present 00:31:35.960 Atomic Write Unit (Normal): 1 00:31:35.960 Atomic Write Unit (PFail): 1 00:31:35.960 Atomic Compare & Write Unit: 1 00:31:35.960 Fused Compare & Write: Supported 00:31:35.960 Scatter-Gather List 00:31:35.960 SGL Command Set: Supported 00:31:35.960 SGL Keyed: Supported 00:31:35.960 SGL Bit Bucket Descriptor: Not Supported 00:31:35.961 SGL Metadata Pointer: Not Supported 00:31:35.961 Oversized SGL: Not Supported 00:31:35.961 SGL Metadata Address: Not Supported 00:31:35.961 SGL Offset: Supported 00:31:35.961 Transport SGL Data Block: Not Supported 00:31:35.961 Replay Protected Memory Block: Not Supported 00:31:35.961 00:31:35.961 Firmware Slot Information 00:31:35.961 ========================= 00:31:35.961 Active slot: 0 00:31:35.961 00:31:35.961 00:31:35.961 Error Log 00:31:35.961 ========= 00:31:35.961 00:31:35.961 Active Namespaces 00:31:35.961 ================= 00:31:35.961 Discovery Log Page 00:31:35.961 ================== 00:31:35.961 Generation Counter: 2 00:31:35.961 Number of Records: 2 00:31:35.961 Record Format: 0 00:31:35.961 00:31:35.961 Discovery Log Entry 0 00:31:35.961 ---------------------- 00:31:35.961 Transport Type: 3 (TCP) 00:31:35.961 Address Family: 1 (IPv4) 00:31:35.961 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:35.961 Entry Flags: 00:31:35.961 Duplicate Returned Information: 1 00:31:35.961 Explicit Persistent Connection Support for Discovery: 1 00:31:35.961 Transport Requirements: 00:31:35.961 Secure Channel: Not Required 00:31:35.961 Port ID: 0 (0x0000) 00:31:35.961 Controller ID: 65535 (0xffff) 00:31:35.961 Admin Max SQ Size: 128 00:31:35.961 Transport Service Identifier: 4420 00:31:35.961 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:35.961 Transport Address: 10.0.0.2 00:31:35.961 
Discovery Log Entry 1 00:31:35.961 ---------------------- 00:31:35.961 Transport Type: 3 (TCP) 00:31:35.961 Address Family: 1 (IPv4) 00:31:35.961 Subsystem Type: 2 (NVM Subsystem) 00:31:35.961 Entry Flags: 00:31:35.961 Duplicate Returned Information: 0 00:31:35.961 Explicit Persistent Connection Support for Discovery: 0 00:31:35.961 Transport Requirements: 00:31:35.961 Secure Channel: Not Required 00:31:35.961 Port ID: 0 (0x0000) 00:31:35.961 Controller ID: 65535 (0xffff) 00:31:35.961 Admin Max SQ Size: 128 00:31:35.961 Transport Service Identifier: 4420 00:31:35.961 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:35.961 Transport Address: 10.0.0.2 [2024-10-01 17:31:34.365109] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:35.961 [2024-10-01 17:31:34.365120] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9540) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.961 [2024-10-01 17:31:34.365132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb96c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.961 [2024-10-01 17:31:34.365142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb9840) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.961 [2024-10-01 17:31:34.365151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.961 [2024-10-01 17:31:34.365164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.365179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.365193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.365404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.365411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.365414] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365418] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365425] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.365439] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.365452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.365666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.365673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.365676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365685] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:35.961 [2024-10-01 17:31:34.365693] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:35.961 [2024-10-01 17:31:34.365702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365706] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365710] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.365716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.365729] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.365885] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.365891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.365895] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365899] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.365908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365912] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.365916] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.365923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.365933] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.366114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.366121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.366125] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.366138] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366145] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.366152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.366163] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.366348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.366354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.366358] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366361] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.366371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.366385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.366395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.366562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.366568] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.366571] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366575] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.366585] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366592] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.961 [2024-10-01 17:31:34.366599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.961 [2024-10-01 17:31:34.366609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.961 [2024-10-01 17:31:34.366777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.961 [2024-10-01 17:31:34.366784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.961 [2024-10-01 17:31:34.366787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366791] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.961 [2024-10-01 17:31:34.366801] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.961 [2024-10-01 17:31:34.366805] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.366808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.366815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.366825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.366999] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.367005] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.367009] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.367022] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.367036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.367047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.367217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.367223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.367227] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367230] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.367240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.367254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.367264] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.367440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.367447] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.367450] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367454] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.367464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367467] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.367478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.367488] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.367660] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.367671] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.367675] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367679] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.367688] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367696] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.367702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.367713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.367880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.367886] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.367890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.367903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.367911] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.367917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.367927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.368138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.368145] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.368149] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.368162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368170] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.368176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.368187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.368362] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.368369] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.368373] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.368386] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368390] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.368401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.368411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.368586] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.368593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.368598] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368602] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.368612] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.368626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.368636] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.368809] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.368815] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.368819] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368823] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 [2024-10-01 17:31:34.368832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.368840] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.368847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.368857] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.369037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.369044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.962 [2024-10-01 17:31:34.369048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.369051] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.962 
[2024-10-01 17:31:34.369061] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.369065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.962 [2024-10-01 17:31:34.369069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.962 [2024-10-01 17:31:34.369075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.962 [2024-10-01 17:31:34.369086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.962 [2024-10-01 17:31:34.369255] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.962 [2024-10-01 17:31:34.369262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.369265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.369279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369282] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.369293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.369303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.369507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.369513] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.369516] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369520] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.369531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.369546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.369556] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.369729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.369735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.369739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.369752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 
17:31:34.369759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.369766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.369776] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.369952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.369959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.369962] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369966] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.369976] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.369983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.369990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.370004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.370174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.370181] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.370184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.370198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370202] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.370212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.370222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.370395] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.370402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.370406] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.370420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370425] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370429] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.370436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.370446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.370625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.370631] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.370635] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370638] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.370648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.370662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.370672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.370876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.370883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.370886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.370900] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370904] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.370907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.370914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.370924] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.371094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.371101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.371105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.371118] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371122] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.371133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.371143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 
17:31:34.371316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.371322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.371326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.371339] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371348] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.371355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.371366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.371538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.371544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.371548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371551] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.371561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371565] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371569] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.371575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.371586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.371762] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.963 [2024-10-01 17:31:34.371768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.963 [2024-10-01 17:31:34.371772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.963 [2024-10-01 17:31:34.371785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.963 [2024-10-01 17:31:34.371793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.963 [2024-10-01 17:31:34.371799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.963 [2024-10-01 17:31:34.371809] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.963 [2024-10-01 17:31:34.371985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.371991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 
17:31:34.371998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.964 [2024-10-01 17:31:34.372012] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372019] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.964 [2024-10-01 17:31:34.372026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.372036] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.964 [2024-10-01 17:31:34.372243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.372249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.372253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.964 [2024-10-01 17:31:34.372266] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372270] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372274] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.964 [2024-10-01 17:31:34.372282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.372293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.964 [2024-10-01 17:31:34.372469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.372476] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.372480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.964 [2024-10-01 17:31:34.372493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372497] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372501] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.964 [2024-10-01 17:31:34.372507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.372517] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.964 [2024-10-01 17:31:34.372696] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.372703] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.372706] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372710] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 
00:31:35.964 [2024-10-01 17:31:34.372720] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372724] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372727] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.964 [2024-10-01 17:31:34.372734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.372744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.964 [2024-10-01 17:31:34.372929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.372935] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.372939] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.964 [2024-10-01 17:31:34.372952] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372956] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.372960] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4f0d0) 00:31:35.964 [2024-10-01 17:31:34.372966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.372977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb99c0, cid 3, qid 0 00:31:35.964 [2024-10-01 17:31:34.377002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.377011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.377014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.377018] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb99c0) on tqpair=0xc4f0d0 00:31:35.964 [2024-10-01 17:31:34.377026] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:31:35.964 00:31:35.964 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:35.964 [2024-10-01 17:31:34.416200] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
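
For reference, the three GET LOG PAGE (02) admin commands traced above are consistent with the usual discovery-log read sequence: cdw10 carries the log page ID in bits 7:0 and the 0-based dword count (NUMDL) in bits 31:16, so 0x00ff0070 requests 256 dwords (the 1024-byte discovery log header), 0x02ff0070 requests 768 dwords (3072 bytes, the header plus the two 1024-byte records reported as "Number of Records: 2"), and 0x00010070 re-reads 2 dwords (the 8-byte generation counter) to confirm the log did not change mid-read. These sizes match the datal values in the corresponding c2h_data records.

The host/identify.sh step above then runs the prebuilt spdk_nvme_identify binary directly against the NVM subsystem nqn.2016-06.io.spdk:cnode1, passing the transport ID as the -r string and enabling all debug log flags with -L all (which is why the DEBUG traces follow). Below is a minimal sketch of making the same connection through the public SPDK NVMe API; it is illustrative only, not the test's own code, and exact option-structure details may differ between SPDK releases.

/* identify_sketch.c - hedged example, not part of the test suite.
 * Connects to the subsystem targeted above and prints a few
 * identify-controller fields, mirroring what spdk_nvme_identify reports. */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment (hugepages, PCI access, etc.). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
                fprintf(stderr, "spdk_env_init failed\n");
                return 1;
        }

        /* Same transport ID string the test passes via -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }

        /* Connect to the controller over NVMe/TCP with default options. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "spdk_nvme_connect failed\n");
                return 1;
        }

        /* Identify Controller data is cached by the driver after connect. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number:    %.20s\n", (const char *)cdata->sn);
        printf("Model Number:     %.40s\n", (const char *)cdata->mn);
        printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
}
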
00:31:35.964 [2024-10-01 17:31:34.416243] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188254 ] 00:31:35.964 [2024-10-01 17:31:34.446513] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:35.964 [2024-10-01 17:31:34.446557] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:35.964 [2024-10-01 17:31:34.446562] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:35.964 [2024-10-01 17:31:34.446572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:35.964 [2024-10-01 17:31:34.446581] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:35.964 [2024-10-01 17:31:34.454222] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:35.964 [2024-10-01 17:31:34.454249] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12c20d0 0 00:31:35.964 [2024-10-01 17:31:34.462004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:35.964 [2024-10-01 17:31:34.462015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:35.964 [2024-10-01 17:31:34.462019] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:35.964 [2024-10-01 17:31:34.462023] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:35.964 [2024-10-01 17:31:34.462050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.462055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.462059] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.964 [2024-10-01 17:31:34.462069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:35.964 [2024-10-01 17:31:34.462086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.964 [2024-10-01 17:31:34.470006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.470014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.470018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.964 [2024-10-01 17:31:34.470031] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:35.964 [2024-10-01 17:31:34.470037] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:35.964 [2024-10-01 17:31:34.470043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:35.964 [2024-10-01 17:31:34.470054] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470062] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.964 [2024-10-01 17:31:34.470069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.470083] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.964 [2024-10-01 17:31:34.470239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.470246] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.470253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.964 [2024-10-01 17:31:34.470262] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:35.964 [2024-10-01 17:31:34.470269] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:35.964 [2024-10-01 17:31:34.470276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.964 [2024-10-01 17:31:34.470290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.470301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.964 [2024-10-01 17:31:34.470461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.964 [2024-10-01 17:31:34.470468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.964 [2024-10-01 17:31:34.470471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470475] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.964 [2024-10-01 17:31:34.470480] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:35.964 [2024-10-01 17:31:34.470488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:35.964 [2024-10-01 17:31:34.470495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.964 [2024-10-01 17:31:34.470502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.964 [2024-10-01 17:31:34.470509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.964 [2024-10-01 17:31:34.470519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.965 [2024-10-01 17:31:34.470680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.965 [2024-10-01 17:31:34.470686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.965 [2024-10-01 17:31:34.470690] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.470694] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.965 [2024-10-01 17:31:34.470698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:35.965 [2024-10-01 17:31:34.470707] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.470711] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.470715] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.965 [2024-10-01 17:31:34.470722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.965 [2024-10-01 17:31:34.470732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.965 [2024-10-01 17:31:34.470902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.965 [2024-10-01 17:31:34.470908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.965 [2024-10-01 17:31:34.470911] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.470915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.965 [2024-10-01 17:31:34.470920] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:35.965 [2024-10-01 17:31:34.470926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:35.965 [2024-10-01 17:31:34.470934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:35.965 [2024-10-01 17:31:34.471039] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:35.965 [2024-10-01 17:31:34.471043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:35.965 [2024-10-01 17:31:34.471051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471059] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.965 [2024-10-01 17:31:34.471065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.965 [2024-10-01 17:31:34.471076] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.965 [2024-10-01 17:31:34.471228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.965 [2024-10-01 17:31:34.471234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.965 [2024-10-01 17:31:34.471238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.965 [2024-10-01 17:31:34.471246] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:35.965 [2024-10-01 17:31:34.471256] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.965 [2024-10-01 17:31:34.471270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.965 [2024-10-01 17:31:34.471280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.965 [2024-10-01 17:31:34.471516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:35.965 [2024-10-01 17:31:34.471522] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:35.965 [2024-10-01 17:31:34.471525] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471529] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:35.965 [2024-10-01 17:31:34.471533] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:35.965 [2024-10-01 17:31:34.471538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:35.965 [2024-10-01 17:31:34.471546] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:35.965 [2024-10-01 17:31:34.471553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:35.965 [2024-10-01 17:31:34.471561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471565] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:35.965 [2024-10-01 17:31:34.471572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.965 [2024-10-01 17:31:34.471582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:35.965 [2024-10-01 17:31:34.471799] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:35.965 [2024-10-01 17:31:34.471806] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:35.965 [2024-10-01 17:31:34.471810] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471814] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=4096, cccid=0 00:31:35.965 [2024-10-01 17:31:34.471819] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132c540) on tqpair(0x12c20d0): expected_datao=0, payload_size=4096 00:31:35.965 [2024-10-01 17:31:34.471823] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471838] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:35.965 [2024-10-01 17:31:34.471842] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.228 [2024-10-01 
17:31:34.517003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.228 [2024-10-01 17:31:34.517014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.228 [2024-10-01 17:31:34.517018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.228 [2024-10-01 17:31:34.517022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:36.229 [2024-10-01 17:31:34.517030] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:36.229 [2024-10-01 17:31:34.517034] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:36.229 [2024-10-01 17:31:34.517039] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:36.229 [2024-10-01 17:31:34.517043] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:36.229 [2024-10-01 17:31:34.517048] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:36.229 [2024-10-01 17:31:34.517053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517068] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517075] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.229 [2024-10-01 17:31:34.517095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:36.229 [2024-10-01 17:31:34.517268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.229 [2024-10-01 17:31:34.517275] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.229 [2024-10-01 17:31:34.517278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:36.229 [2024-10-01 17:31:34.517289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517296] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.229 [2024-10-01 17:31:34.517309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12c20d0) 00:31:36.229 
[2024-10-01 17:31:34.517325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.229 [2024-10-01 17:31:34.517331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517339] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.229 [2024-10-01 17:31:34.517350] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517354] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517358] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.229 [2024-10-01 17:31:34.517368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.229 [2024-10-01 17:31:34.517408] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c540, cid 0, qid 0 00:31:36.229 [2024-10-01 17:31:34.517414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c6c0, cid 1, qid 0 00:31:36.229 [2024-10-01 17:31:34.517418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c840, cid 2, qid 0 00:31:36.229 [2024-10-01 17:31:34.517423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c9c0, cid 3, qid 0 00:31:36.229 [2024-10-01 17:31:34.517428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.229 [2024-10-01 17:31:34.517637] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.229 [2024-10-01 17:31:34.517643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.229 [2024-10-01 17:31:34.517647] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517651] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.229 [2024-10-01 17:31:34.517655] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:36.229 [2024-10-01 17:31:34.517661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517669] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517683] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.517697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.229 [2024-10-01 17:31:34.517707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.229 [2024-10-01 17:31:34.517895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.229 [2024-10-01 17:31:34.517902] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.229 [2024-10-01 17:31:34.517906] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.229 [2024-10-01 17:31:34.517975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517984] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.517991] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.517999] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.518005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.229 [2024-10-01 17:31:34.518016] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.229 [2024-10-01 17:31:34.518172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.229 [2024-10-01 17:31:34.518178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.229 [2024-10-01 17:31:34.518182] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.518186] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=4096, cccid=4 00:31:36.229 [2024-10-01 17:31:34.518190] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cb40) on tqpair(0x12c20d0): expected_datao=0, payload_size=4096 00:31:36.229 [2024-10-01 17:31:34.518195] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.518208] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.518213] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.229 [2024-10-01 17:31:34.560157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:31:36.229 [2024-10-01 17:31:34.560161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.229 [2024-10-01 17:31:34.560173] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:36.229 [2024-10-01 17:31:34.560185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.560194] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:36.229 [2024-10-01 17:31:34.560201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.229 [2024-10-01 17:31:34.560212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.229 [2024-10-01 17:31:34.560223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.229 [2024-10-01 17:31:34.560483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.229 [2024-10-01 17:31:34.560490] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.229 [2024-10-01 17:31:34.560493] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=4096, cccid=4 00:31:36.229 [2024-10-01 17:31:34.560502] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cb40) on tqpair(0x12c20d0): expected_datao=0, payload_size=4096 00:31:36.229 [2024-10-01 17:31:34.560508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560522] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.560526] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.229 [2024-10-01 17:31:34.604001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.229 [2024-10-01 17:31:34.604012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.229 [2024-10-01 17:31:34.604016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.604020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.604033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.604042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.604050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.604053] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.604060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.604073] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.230 [2024-10-01 17:31:34.604222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.230 [2024-10-01 17:31:34.604228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.230 [2024-10-01 17:31:34.604232] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.604236] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=4096, cccid=4 00:31:36.230 [2024-10-01 17:31:34.604240] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cb40) on tqpair(0x12c20d0): expected_datao=0, payload_size=4096 00:31:36.230 [2024-10-01 17:31:34.604245] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.604258] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.604263] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646130] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 [2024-10-01 17:31:34.646140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.646143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.646155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646194] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:36.230 [2024-10-01 17:31:34.646201] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:36.230 [2024-10-01 17:31:34.646206] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:36.230 [2024-10-01 17:31:34.646219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.646230] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.646236] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646240] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646244] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.646250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.230 [2024-10-01 17:31:34.646262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.230 [2024-10-01 17:31:34.646267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ccc0, cid 5, qid 0 00:31:36.230 [2024-10-01 17:31:34.646446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 [2024-10-01 17:31:34.646452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.646456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.646466] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 [2024-10-01 17:31:34.646472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.646476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646479] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ccc0) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.646489] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.646499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.646509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ccc0, cid 5, qid 0 00:31:36.230 [2024-10-01 17:31:34.646690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 [2024-10-01 17:31:34.646696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.646700] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646704] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ccc0) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.646713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646717] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.646723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.646733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ccc0, cid 5, qid 0 00:31:36.230 [2024-10-01 17:31:34.646976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 
[2024-10-01 17:31:34.646982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.646986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.646990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ccc0) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.647005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647010] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.647016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.647026] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ccc0, cid 5, qid 0 00:31:36.230 [2024-10-01 17:31:34.647222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.230 [2024-10-01 17:31:34.647229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.230 [2024-10-01 17:31:34.647232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ccc0) on tqpair=0x12c20d0 00:31:36.230 [2024-10-01 17:31:34.647250] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647254] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.647261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.647268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.647278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.647285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.647295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.647303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647307] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12c20d0) 00:31:36.230 [2024-10-01 17:31:34.647313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.230 [2024-10-01 17:31:34.647325] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ccc0, cid 5, qid 0 00:31:36.230 [2024-10-01 17:31:34.647330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cb40, cid 4, qid 0 00:31:36.230 [2024-10-01 17:31:34.647335] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132ce40, cid 6, qid 0 00:31:36.230 [2024-10-01 17:31:34.647340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cfc0, cid 7, qid 0 00:31:36.230 [2024-10-01 17:31:34.647565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.230 [2024-10-01 17:31:34.647572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.230 [2024-10-01 17:31:34.647576] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=8192, cccid=5 00:31:36.230 [2024-10-01 17:31:34.647584] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132ccc0) on tqpair(0x12c20d0): expected_datao=0, payload_size=8192 00:31:36.230 [2024-10-01 17:31:34.647588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647667] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647671] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.230 [2024-10-01 17:31:34.647683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.230 [2024-10-01 17:31:34.647688] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647692] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=512, cccid=4 00:31:36.230 [2024-10-01 17:31:34.647697] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cb40) on tqpair(0x12c20d0): expected_datao=0, payload_size=512 00:31:36.230 [2024-10-01 17:31:34.647701] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647708] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.230 [2024-10-01 17:31:34.647711] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647717] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.231 [2024-10-01 17:31:34.647723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.231 [2024-10-01 17:31:34.647726] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647730] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12c20d0): datao=0, datal=512, cccid=6 00:31:36.231 [2024-10-01 17:31:34.647734] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132ce40) on tqpair(0x12c20d0): expected_datao=0, payload_size=512 00:31:36.231 [2024-10-01 17:31:34.647738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647745] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647748] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.231 [2024-10-01 17:31:34.647760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.231 [2024-10-01 17:31:34.647763] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647767] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x12c20d0): datao=0, datal=4096, cccid=7 00:31:36.231 [2024-10-01 17:31:34.647771] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cfc0) on tqpair(0x12c20d0): expected_datao=0, payload_size=4096 00:31:36.231 [2024-10-01 17:31:34.647776] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647782] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647786] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647796] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.231 [2024-10-01 17:31:34.647802] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.231 [2024-10-01 17:31:34.647806] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647810] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ccc0) on tqpair=0x12c20d0 00:31:36.231 [2024-10-01 17:31:34.647821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.231 [2024-10-01 17:31:34.647827] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.231 [2024-10-01 17:31:34.647830] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647834] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cb40) on tqpair=0x12c20d0 00:31:36.231 [2024-10-01 17:31:34.647844] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.231 [2024-10-01 17:31:34.647850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.231 [2024-10-01 17:31:34.647854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132ce40) on tqpair=0x12c20d0 00:31:36.231 [2024-10-01 17:31:34.647864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.231 [2024-10-01 17:31:34.647870] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.231 [2024-10-01 17:31:34.647874] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.231 [2024-10-01 17:31:34.647878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cfc0) on tqpair=0x12c20d0 00:31:36.231 ===================================================== 00:31:36.231 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.231 ===================================================== 00:31:36.231 Controller Capabilities/Features 00:31:36.231 ================================ 00:31:36.231 Vendor ID: 8086 00:31:36.231 Subsystem Vendor ID: 8086 00:31:36.231 Serial Number: SPDK00000000000001 00:31:36.231 Model Number: SPDK bdev Controller 00:31:36.231 Firmware Version: 25.01 00:31:36.231 Recommended Arb Burst: 6 00:31:36.231 IEEE OUI Identifier: e4 d2 5c 00:31:36.231 Multi-path I/O 00:31:36.231 May have multiple subsystem ports: Yes 00:31:36.231 May have multiple controllers: Yes 00:31:36.231 Associated with SR-IOV VF: No 00:31:36.231 Max Data Transfer Size: 131072 00:31:36.231 Max Number of Namespaces: 32 00:31:36.231 Max Number of I/O Queues: 127 00:31:36.231 NVMe Specification Version (VS): 1.3 00:31:36.231 NVMe Specification Version (Identify): 1.3 00:31:36.231 Maximum Queue Entries: 128 00:31:36.231 Contiguous Queues Required: Yes 00:31:36.231 Arbitration Mechanisms Supported 00:31:36.231 Weighted Round Robin: Not Supported 
00:31:36.231 Vendor Specific: Not Supported 00:31:36.231 Reset Timeout: 15000 ms 00:31:36.231 Doorbell Stride: 4 bytes 00:31:36.231 NVM Subsystem Reset: Not Supported 00:31:36.231 Command Sets Supported 00:31:36.231 NVM Command Set: Supported 00:31:36.231 Boot Partition: Not Supported 00:31:36.231 Memory Page Size Minimum: 4096 bytes 00:31:36.231 Memory Page Size Maximum: 4096 bytes 00:31:36.231 Persistent Memory Region: Not Supported 00:31:36.231 Optional Asynchronous Events Supported 00:31:36.231 Namespace Attribute Notices: Supported 00:31:36.231 Firmware Activation Notices: Not Supported 00:31:36.231 ANA Change Notices: Not Supported 00:31:36.231 PLE Aggregate Log Change Notices: Not Supported 00:31:36.231 LBA Status Info Alert Notices: Not Supported 00:31:36.231 EGE Aggregate Log Change Notices: Not Supported 00:31:36.231 Normal NVM Subsystem Shutdown event: Not Supported 00:31:36.231 Zone Descriptor Change Notices: Not Supported 00:31:36.231 Discovery Log Change Notices: Not Supported 00:31:36.231 Controller Attributes 00:31:36.231 128-bit Host Identifier: Supported 00:31:36.231 Non-Operational Permissive Mode: Not Supported 00:31:36.231 NVM Sets: Not Supported 00:31:36.231 Read Recovery Levels: Not Supported 00:31:36.231 Endurance Groups: Not Supported 00:31:36.231 Predictable Latency Mode: Not Supported 00:31:36.231 Traffic Based Keep ALive: Not Supported 00:31:36.231 Namespace Granularity: Not Supported 00:31:36.231 SQ Associations: Not Supported 00:31:36.231 UUID List: Not Supported 00:31:36.231 Multi-Domain Subsystem: Not Supported 00:31:36.231 Fixed Capacity Management: Not Supported 00:31:36.231 Variable Capacity Management: Not Supported 00:31:36.231 Delete Endurance Group: Not Supported 00:31:36.231 Delete NVM Set: Not Supported 00:31:36.231 Extended LBA Formats Supported: Not Supported 00:31:36.231 Flexible Data Placement Supported: Not Supported 00:31:36.231 00:31:36.231 Controller Memory Buffer Support 00:31:36.231 ================================ 00:31:36.231 Supported: No 00:31:36.231 00:31:36.231 Persistent Memory Region Support 00:31:36.231 ================================ 00:31:36.231 Supported: No 00:31:36.231 00:31:36.231 Admin Command Set Attributes 00:31:36.231 ============================ 00:31:36.231 Security Send/Receive: Not Supported 00:31:36.231 Format NVM: Not Supported 00:31:36.231 Firmware Activate/Download: Not Supported 00:31:36.231 Namespace Management: Not Supported 00:31:36.231 Device Self-Test: Not Supported 00:31:36.231 Directives: Not Supported 00:31:36.231 NVMe-MI: Not Supported 00:31:36.231 Virtualization Management: Not Supported 00:31:36.231 Doorbell Buffer Config: Not Supported 00:31:36.231 Get LBA Status Capability: Not Supported 00:31:36.231 Command & Feature Lockdown Capability: Not Supported 00:31:36.231 Abort Command Limit: 4 00:31:36.231 Async Event Request Limit: 4 00:31:36.231 Number of Firmware Slots: N/A 00:31:36.231 Firmware Slot 1 Read-Only: N/A 00:31:36.231 Firmware Activation Without Reset: N/A 00:31:36.231 Multiple Update Detection Support: N/A 00:31:36.231 Firmware Update Granularity: No Information Provided 00:31:36.231 Per-Namespace SMART Log: No 00:31:36.231 Asymmetric Namespace Access Log Page: Not Supported 00:31:36.231 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:36.231 Command Effects Log Page: Supported 00:31:36.231 Get Log Page Extended Data: Supported 00:31:36.231 Telemetry Log Pages: Not Supported 00:31:36.231 Persistent Event Log Pages: Not Supported 00:31:36.231 Supported Log Pages Log Page: May Support 
00:31:36.231 Commands Supported & Effects Log Page: Not Supported 00:31:36.231 Feature Identifiers & Effects Log Page:May Support 00:31:36.231 NVMe-MI Commands & Effects Log Page: May Support 00:31:36.231 Data Area 4 for Telemetry Log: Not Supported 00:31:36.231 Error Log Page Entries Supported: 128 00:31:36.231 Keep Alive: Supported 00:31:36.231 Keep Alive Granularity: 10000 ms 00:31:36.231 00:31:36.231 NVM Command Set Attributes 00:31:36.231 ========================== 00:31:36.231 Submission Queue Entry Size 00:31:36.231 Max: 64 00:31:36.231 Min: 64 00:31:36.231 Completion Queue Entry Size 00:31:36.231 Max: 16 00:31:36.231 Min: 16 00:31:36.231 Number of Namespaces: 32 00:31:36.231 Compare Command: Supported 00:31:36.231 Write Uncorrectable Command: Not Supported 00:31:36.231 Dataset Management Command: Supported 00:31:36.231 Write Zeroes Command: Supported 00:31:36.231 Set Features Save Field: Not Supported 00:31:36.231 Reservations: Supported 00:31:36.231 Timestamp: Not Supported 00:31:36.231 Copy: Supported 00:31:36.231 Volatile Write Cache: Present 00:31:36.231 Atomic Write Unit (Normal): 1 00:31:36.231 Atomic Write Unit (PFail): 1 00:31:36.231 Atomic Compare & Write Unit: 1 00:31:36.231 Fused Compare & Write: Supported 00:31:36.231 Scatter-Gather List 00:31:36.231 SGL Command Set: Supported 00:31:36.231 SGL Keyed: Supported 00:31:36.231 SGL Bit Bucket Descriptor: Not Supported 00:31:36.231 SGL Metadata Pointer: Not Supported 00:31:36.231 Oversized SGL: Not Supported 00:31:36.231 SGL Metadata Address: Not Supported 00:31:36.231 SGL Offset: Supported 00:31:36.231 Transport SGL Data Block: Not Supported 00:31:36.231 Replay Protected Memory Block: Not Supported 00:31:36.231 00:31:36.231 Firmware Slot Information 00:31:36.232 ========================= 00:31:36.232 Active slot: 1 00:31:36.232 Slot 1 Firmware Revision: 25.01 00:31:36.232 00:31:36.232 00:31:36.232 Commands Supported and Effects 00:31:36.232 ============================== 00:31:36.232 Admin Commands 00:31:36.232 -------------- 00:31:36.232 Get Log Page (02h): Supported 00:31:36.232 Identify (06h): Supported 00:31:36.232 Abort (08h): Supported 00:31:36.232 Set Features (09h): Supported 00:31:36.232 Get Features (0Ah): Supported 00:31:36.232 Asynchronous Event Request (0Ch): Supported 00:31:36.232 Keep Alive (18h): Supported 00:31:36.232 I/O Commands 00:31:36.232 ------------ 00:31:36.232 Flush (00h): Supported LBA-Change 00:31:36.232 Write (01h): Supported LBA-Change 00:31:36.232 Read (02h): Supported 00:31:36.232 Compare (05h): Supported 00:31:36.232 Write Zeroes (08h): Supported LBA-Change 00:31:36.232 Dataset Management (09h): Supported LBA-Change 00:31:36.232 Copy (19h): Supported LBA-Change 00:31:36.232 00:31:36.232 Error Log 00:31:36.232 ========= 00:31:36.232 00:31:36.232 Arbitration 00:31:36.232 =========== 00:31:36.232 Arbitration Burst: 1 00:31:36.232 00:31:36.232 Power Management 00:31:36.232 ================ 00:31:36.232 Number of Power States: 1 00:31:36.232 Current Power State: Power State #0 00:31:36.232 Power State #0: 00:31:36.232 Max Power: 0.00 W 00:31:36.232 Non-Operational State: Operational 00:31:36.232 Entry Latency: Not Reported 00:31:36.232 Exit Latency: Not Reported 00:31:36.232 Relative Read Throughput: 0 00:31:36.232 Relative Read Latency: 0 00:31:36.232 Relative Write Throughput: 0 00:31:36.232 Relative Write Latency: 0 00:31:36.232 Idle Power: Not Reported 00:31:36.232 Active Power: Not Reported 00:31:36.232 Non-Operational Permissive Mode: Not Supported 00:31:36.232 00:31:36.232 Health 
Information 00:31:36.232 ================== 00:31:36.232 Critical Warnings: 00:31:36.232 Available Spare Space: OK 00:31:36.232 Temperature: OK 00:31:36.232 Device Reliability: OK 00:31:36.232 Read Only: No 00:31:36.232 Volatile Memory Backup: OK 00:31:36.232 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:36.232 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:36.232 Available Spare: 0% 00:31:36.232 Available Spare Threshold: 0% 00:31:36.232 Life Percentage Used:[2024-10-01 17:31:34.647974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.232 [2024-10-01 17:31:34.647981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12c20d0) 00:31:36.232 [2024-10-01 17:31:34.647988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.232 [2024-10-01 17:31:34.652004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cfc0, cid 7, qid 0 00:31:36.232 [2024-10-01 17:31:34.652159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.232 [2024-10-01 17:31:34.652165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.232 [2024-10-01 17:31:34.652169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.232 [2024-10-01 17:31:34.652173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cfc0) on tqpair=0x12c20d0 00:31:36.232 [2024-10-01 17:31:34.652204] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:36.232 [2024-10-01 17:31:34.652213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c540) on tqpair=0x12c20d0 00:31:36.232 [2024-10-01 17:31:34.652219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.232 [2024-10-01 17:31:34.652225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c6c0) on tqpair=0x12c20d0 00:31:36.232 [2024-10-01 17:31:34.652229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.232 [2024-10-01 17:31:34.652234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c840) on tqpair=0x12c20d0 00:31:36.232 [2024-10-01 17:31:34.652239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.232 [2024-10-01 17:31:34.652244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c9c0) on tqpair=0x12c20d0 00:31:36.232 [2024-10-01 17:31:34.652249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.232 [2024-10-01 17:31:34.652257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.232 [2024-10-01 17:31:34.652261] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.232 [2024-10-01 17:31:34.652264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12c20d0) 00:31:36.232 [2024-10-01 17:31:34.652271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.232 [2024-10-01 17:31:34.652283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c9c0, cid 3, qid 0 00:31:36.232 [2024-10-01 
17:31:34.652461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:36.232 [2024-10-01 17:31:34.652467] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:36.232 [2024-10-01 17:31:34.652471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652475] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c9c0) on tqpair=0x12c20d0
00:31:36.232 [2024-10-01 17:31:34.652481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652485] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652489] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12c20d0)
00:31:36.232 [2024-10-01 17:31:34.652496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:36.232 [2024-10-01 17:31:34.652508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c9c0, cid 3, qid 0
00:31:36.232 [2024-10-01 17:31:34.652674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:36.232 [2024-10-01 17:31:34.652681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:36.232 [2024-10-01 17:31:34.652684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132c9c0) on tqpair=0x12c20d0
00:31:36.232 [2024-10-01 17:31:34.652695] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:31:36.232 [2024-10-01 17:31:34.652700] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:31:36.232 [2024-10-01 17:31:34.652709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:36.232 [2024-10-01 17:31:34.652716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12c20d0)
00:31:36.232 [2024-10-01 17:31:34.652723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:36.232 [2024-10-01 17:31:34.652734] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132c9c0, cid 3, qid 0
[... the capsule-response / FABRIC PROPERTY GET shutdown-poll *DEBUG* sequence above repeats, identical apart from timestamps, from 17:31:34.652890 through 17:31:34.660184 ...]
00:31:36.234 [2024-10-01 17:31:34.660191] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:31:36.234 0%
00:31:36.234 Data Units Read: 0
00:31:36.234 Data Units Written: 0
00:31:36.234 Host Read Commands: 0
00:31:36.234 Host Write Commands: 0
00:31:36.234 Controller Busy Time: 0 minutes
00:31:36.234 Power Cycles: 0
00:31:36.234 Power On Hours: 0 hours
00:31:36.234 Unsafe Shutdowns: 0
00:31:36.234 Unrecoverable Media Errors: 0
00:31:36.234 Lifetime Error Log Entries: 0
00:31:36.234 Warning Temperature Time: 0 minutes
00:31:36.234 Critical Temperature Time: 0 minutes
00:31:36.234
00:31:36.234 Number of Queues
00:31:36.234 ================
00:31:36.234 Number of I/O Submission Queues: 127
00:31:36.234 Number of I/O Completion Queues: 127
00:31:36.234
00:31:36.234 Active Namespaces
00:31:36.234 =================
00:31:36.234 Namespace ID:1
00:31:36.234 Error Recovery Timeout: Unlimited
00:31:36.234 Command Set Identifier: NVM (00h)
00:31:36.234 Deallocate: Supported
00:31:36.234 Deallocated/Unwritten Error: Not Supported
00:31:36.234 Deallocated Read Value: Unknown
00:31:36.234 Deallocate in Write Zeroes: Not Supported
00:31:36.234 Deallocated Guard Field: 0xFFFF
00:31:36.234 Flush: Supported
00:31:36.234 Reservation: Supported
00:31:36.234 Namespace Sharing Capabilities: Multiple Controllers
00:31:36.234 Size (in LBAs): 131072 (0GiB)
00:31:36.234 Capacity (in LBAs): 131072 (0GiB)
00:31:36.234 Utilization (in LBAs): 131072 (0GiB)
00:31:36.234 NGUID: ABCDEF0123456789ABCDEF0123456789
00:31:36.234 EUI64: ABCDEF0123456789 00:31:36.234 UUID: d21b665d-302e-4bca-afcb-1e958ee59228 00:31:36.234 Thin Provisioning: Not Supported 00:31:36.234 Per-NS Atomic Units: Yes 00:31:36.234 Atomic Boundary Size (Normal): 0 00:31:36.234 Atomic Boundary Size (PFail): 0 00:31:36.234 Atomic Boundary Offset: 0 00:31:36.234 Maximum Single Source Range Length: 65535 00:31:36.234 Maximum Copy Length: 65535 00:31:36.234 Maximum Source Range Count: 1 00:31:36.234 NGUID/EUI64 Never Reused: No 00:31:36.234 Namespace Write Protected: No 00:31:36.234 Number of LBA Formats: 1 00:31:36.234 Current LBA Format: LBA Format #00 00:31:36.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:36.234 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.234 rmmod nvme_tcp 00:31:36.234 rmmod nvme_fabrics 00:31:36.234 rmmod nvme_keyring 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3187924 ']' 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3187924 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3187924 ']' 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3187924 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:36.234 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3187924 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3187924' 00:31:36.494 killing process 
with pid 3187924 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3187924 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3187924 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.494 17:31:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.036 00:31:39.036 real 0m11.621s 00:31:39.036 user 0m8.559s 00:31:39.036 sys 0m6.183s 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.036 ************************************ 00:31:39.036 END TEST nvmf_identify 00:31:39.036 ************************************ 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.036 ************************************ 00:31:39.036 START TEST nvmf_perf 00:31:39.036 ************************************ 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:39.036 * Looking for test storage... 
00:31:39.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.036 --rc genhtml_branch_coverage=1 00:31:39.036 --rc genhtml_function_coverage=1 00:31:39.036 --rc genhtml_legend=1 00:31:39.036 --rc geninfo_all_blocks=1 00:31:39.036 --rc geninfo_unexecuted_blocks=1 00:31:39.036 00:31:39.036 ' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.036 --rc genhtml_branch_coverage=1 00:31:39.036 --rc genhtml_function_coverage=1 00:31:39.036 --rc genhtml_legend=1 00:31:39.036 --rc geninfo_all_blocks=1 00:31:39.036 --rc geninfo_unexecuted_blocks=1 00:31:39.036 00:31:39.036 ' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.036 --rc genhtml_branch_coverage=1 00:31:39.036 --rc genhtml_function_coverage=1 00:31:39.036 --rc genhtml_legend=1 00:31:39.036 --rc geninfo_all_blocks=1 00:31:39.036 --rc geninfo_unexecuted_blocks=1 00:31:39.036 00:31:39.036 ' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.036 --rc genhtml_branch_coverage=1 00:31:39.036 --rc genhtml_function_coverage=1 00:31:39.036 --rc genhtml_legend=1 00:31:39.036 --rc geninfo_all_blocks=1 00:31:39.036 --rc geninfo_unexecuted_blocks=1 00:31:39.036 00:31:39.036 ' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.036 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:39.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.037 17:31:37 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.037 17:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:47.181 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:47.181 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:47.181 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:47.182 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:47.182 17:31:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:47.182 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.182 17:31:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:47.182 00:31:47.182 --- 10.0.0.2 ping statistics --- 00:31:47.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.182 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:47.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:31:47.182 00:31:47.182 --- 10.0.0.1 ping statistics --- 00:31:47.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.182 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3192299 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3192299 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3192299 ']' 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:47.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:47.182 17:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.182 [2024-10-01 17:31:44.758261] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:31:47.182 [2024-10-01 17:31:44.758329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.182 [2024-10-01 17:31:44.834531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:47.182 [2024-10-01 17:31:44.874134] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.182 [2024-10-01 17:31:44.874178] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.182 [2024-10-01 17:31:44.874186] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.182 [2024-10-01 17:31:44.874193] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.182 [2024-10-01 17:31:44.874199] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.182 [2024-10-01 17:31:44.874284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.182 [2024-10-01 17:31:44.874428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:47.182 [2024-10-01 17:31:44.874592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.182 [2024-10-01 17:31:44.874592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:47.182 17:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:47.754 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:47.754 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:47.754 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:47.754 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:48.013 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
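At this point gen_nvme.sh has registered the local NVMe drive as Nvme0n1 and a 64 MiB, 512-byte-block malloc bdev has been created; the trace that follows wires both into an NVMe-oF/TCP subsystem and exposes it on 10.0.0.2:4420. As a reference, a minimal sketch of the same RPC sequence outside the test harness (the rpc.py path and shell variables here are illustrative; the NQN, serial, bdev names, address, port, and flags are the ones visible in the trace, and a running nvmf_tgt is assumed):

  #!/usr/bin/env bash
  # Sketch: build the NVMe-oF/TCP target that perf.sh exercises.
  rpc=./scripts/rpc.py                     # illustrative path to SPDK's rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc bdev_malloc_create 64 512           # 64 MiB bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_transport -t tcp -o     # TCP transport, flags as used by the test
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns $nqn Malloc0  # namespace 1: malloc bdev
  $rpc nvmf_subsystem_add_ns $nqn Nvme0n1  # namespace 2: local NVMe drive
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs below then target this listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.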
00:31:48.013 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:48.013 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:48.013 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:48.013 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:48.273 [2024-10-01 17:31:46.628178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.273 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:48.532 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:48.532 17:31:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:48.532 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:48.532 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.792 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.792 [2024-10-01 17:31:47.338800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.053 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:49.053 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:49.053 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:49.053 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:49.053 17:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:50.438 Initializing NVMe Controllers 00:31:50.438 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:50.438 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:50.438 Initialization complete. Launching workers. 
00:31:50.438 ======================================================== 00:31:50.438 Latency(us) 00:31:50.438 Device Information : IOPS MiB/s Average min max 00:31:50.438 PCIE (0000:65:00.0) NSID 1 from core 0: 78750.92 307.62 405.67 13.40 5274.99 00:31:50.438 ======================================================== 00:31:50.438 Total : 78750.92 307.62 405.67 13.40 5274.99 00:31:50.438 00:31:50.438 17:31:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.823 Initializing NVMe Controllers 00:31:51.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:51.823 Initialization complete. Launching workers. 00:31:51.823 ======================================================== 00:31:51.823 Latency(us) 00:31:51.823 Device Information : IOPS MiB/s Average min max 00:31:51.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.00 0.43 9399.47 244.99 44915.84 00:31:51.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19698.08 7945.31 47889.65 00:31:51.823 ======================================================== 00:31:51.823 Total : 160.00 0.62 12682.15 244.99 47889.65 00:31:51.823 00:31:51.823 17:31:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:53.207 Initializing NVMe Controllers 00:31:53.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:53.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:53.207 Initialization complete. Launching workers. 00:31:53.207 ======================================================== 00:31:53.207 Latency(us) 00:31:53.207 Device Information : IOPS MiB/s Average min max 00:31:53.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10450.06 40.82 3073.39 484.30 45244.98 00:31:53.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3774.21 14.74 8602.73 5908.85 47843.79 00:31:53.207 ======================================================== 00:31:53.207 Total : 14224.27 55.56 4540.52 484.30 47843.79 00:31:53.207 00:31:53.208 17:31:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:53.208 17:31:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:53.208 17:31:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.751 Initializing NVMe Controllers 00:31:55.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.751 Controller IO queue size 128, less than required. 00:31:55.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:55.751 Controller IO queue size 128, less than required. 00:31:55.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:55.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:55.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:55.751 Initialization complete. Launching workers. 00:31:55.751 ======================================================== 00:31:55.751 Latency(us) 00:31:55.751 Device Information : IOPS MiB/s Average min max 00:31:55.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1583.21 395.80 82275.55 52406.16 125235.05 00:31:55.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 634.28 158.57 210934.88 90219.61 329074.51 00:31:55.751 ======================================================== 00:31:55.751 Total : 2217.50 554.37 119076.75 52406.16 329074.51 00:31:55.751 00:31:55.751 17:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:55.751 No valid NVMe controllers or AIO or URING devices found 00:31:55.751 Initializing NVMe Controllers 00:31:55.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.751 Controller IO queue size 128, less than required. 00:31:55.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:55.751 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:55.751 Controller IO queue size 128, less than required. 00:31:55.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:55.751 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:55.751 WARNING: Some requested NVMe devices were skipped 00:31:55.751 17:31:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:58.296 Initializing NVMe Controllers 00:31:58.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:58.296 Controller IO queue size 128, less than required. 00:31:58.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:58.296 Controller IO queue size 128, less than required. 00:31:58.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:58.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:58.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:58.296 Initialization complete. Launching workers. 
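# The block that follows is what --transport-stat adds to the normal perf output:
# per-namespace TCP poll-group counters (polls, idle_polls, sock_completions,
# nvme_completions, submitted_requests, queued_requests) printed ahead of the usual
# latency table. The flag simply augments an otherwise ordinary run, as in the command
# already used above:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 262144 -w randrw -M 50 -t 2 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat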
00:31:58.296 00:31:58.296 ==================== 00:31:58.296 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:58.296 TCP transport: 00:31:58.296 polls: 20950 00:31:58.296 idle_polls: 11169 00:31:58.296 sock_completions: 9781 00:31:58.296 nvme_completions: 6759 00:31:58.296 submitted_requests: 10146 00:31:58.296 queued_requests: 1 00:31:58.296 00:31:58.296 ==================== 00:31:58.296 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:58.296 TCP transport: 00:31:58.296 polls: 21532 00:31:58.296 idle_polls: 10926 00:31:58.296 sock_completions: 10606 00:31:58.296 nvme_completions: 6453 00:31:58.296 submitted_requests: 9690 00:31:58.296 queued_requests: 1 00:31:58.296 ======================================================== 00:31:58.296 Latency(us) 00:31:58.296 Device Information : IOPS MiB/s Average min max 00:31:58.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.53 421.88 76626.49 42142.45 129600.82 00:31:58.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1611.12 402.78 81126.50 37976.44 132307.36 00:31:58.297 ======================================================== 00:31:58.297 Total : 3298.66 824.66 78824.38 37976.44 132307.36 00:31:58.297 00:31:58.297 17:31:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:58.297 17:31:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.297 17:31:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:58.297 17:31:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:58.297 17:31:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:59.680 17:31:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:59.680 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:59.680 { 00:31:59.680 "uuid": "5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb", 00:31:59.680 "name": "lvs_0", 00:31:59.680 "base_bdev": "Nvme0n1", 00:31:59.680 "total_data_clusters": 457407, 00:31:59.680 "free_clusters": 457407, 00:31:59.680 "block_size": 512, 00:31:59.680 "cluster_size": 4194304 00:31:59.680 } 00:31:59.680 ]' 00:31:59.680 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb") .free_clusters' 00:31:59.680 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:31:59.680 17:31:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb") .cluster_size' 00:31:59.680 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:59.680 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:31:59.681 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:31:59.681 1829628 00:31:59.681 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:59.681 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:59.681 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb lbd_0 20480 00:31:59.941 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ec43faa3-eb06-40d7-9767-d2e84f39824e 00:31:59.941 17:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ec43faa3-eb06-40d7-9767-d2e84f39824e lvs_n_0 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a5d40683-6c25-4667-93f1-e988c6b11a2f 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a5d40683-6c25-4667-93f1-e988c6b11a2f 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a5d40683-6c25-4667-93f1-e988c6b11a2f 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:01.854 17:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:01.854 { 00:32:01.854 "uuid": "5edb65c4-60b4-4fa3-8ca1-4ee7f3cc33eb", 00:32:01.854 "name": "lvs_0", 00:32:01.854 "base_bdev": "Nvme0n1", 00:32:01.854 "total_data_clusters": 457407, 00:32:01.854 "free_clusters": 452287, 00:32:01.854 "block_size": 512, 00:32:01.854 "cluster_size": 4194304 00:32:01.854 }, 00:32:01.854 { 00:32:01.854 "uuid": "a5d40683-6c25-4667-93f1-e988c6b11a2f", 00:32:01.854 "name": "lvs_n_0", 00:32:01.854 "base_bdev": "ec43faa3-eb06-40d7-9767-d2e84f39824e", 00:32:01.854 "total_data_clusters": 5114, 00:32:01.854 "free_clusters": 5114, 00:32:01.854 "block_size": 512, 00:32:01.854 "cluster_size": 4194304 00:32:01.854 } 00:32:01.854 ]' 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a5d40683-6c25-4667-93f1-e988c6b11a2f") .free_clusters' 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a5d40683-6c25-4667-93f1-e988c6b11a2f") .cluster_size' 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:32:01.854 20456 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5d40683-6c25-4667-93f1-e988c6b11a2f lbd_nest_0 20456 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=082189b2-f70a-46a1-89ef-4e8ae795fead 00:32:01.854 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.114 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:02.114 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 082189b2-f70a-46a1-89ef-4e8ae795fead 00:32:02.374 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.634 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:02.634 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:02.634 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:02.634 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:02.634 17:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.861 Initializing NVMe Controllers 00:32:14.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:14.861 Initialization complete. Launching workers. 00:32:14.861 ======================================================== 00:32:14.861 Latency(us) 00:32:14.861 Device Information : IOPS MiB/s Average min max 00:32:14.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.30 0.02 21614.10 121.28 46602.77 00:32:14.861 ======================================================== 00:32:14.861 Total : 46.30 0.02 21614.10 121.28 46602.77 00:32:14.861 00:32:14.861 17:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:14.861 17:32:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:24.862 Initializing NVMe Controllers 00:32:24.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.862 Initialization complete. Launching workers. 
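# The remaining perf runs in this test all come from the qd_depth x io_size sweep
# declared above (queue depths 1, 32 and 128 against 512-byte and 128 KiB I/Os). A
# minimal stand-alone sketch of that loop, reusing the binary path and target address
# from this job:
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
for qd in 1 32 128; do
  for io in 512 131072; do
    "$PERF" -q "$qd" -o "$io" -w randrw -M 50 -t 10 -r "$TRID"
  done
done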
00:32:24.862 ======================================================== 00:32:24.862 Latency(us) 00:32:24.862 Device Information : IOPS MiB/s Average min max 00:32:24.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 8.62 14516.21 4987.85 55867.55 00:32:24.862 ======================================================== 00:32:24.862 Total : 69.00 8.62 14516.21 4987.85 55867.55 00:32:24.862 00:32:24.862 17:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:24.862 17:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:24.862 17:32:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:34.862 Initializing NVMe Controllers 00:32:34.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:34.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:34.862 Initialization complete. Launching workers. 00:32:34.862 ======================================================== 00:32:34.862 Latency(us) 00:32:34.862 Device Information : IOPS MiB/s Average min max 00:32:34.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8657.85 4.23 3697.38 305.52 10141.58 00:32:34.862 ======================================================== 00:32:34.862 Total : 8657.85 4.23 3697.38 305.52 10141.58 00:32:34.862 00:32:34.862 17:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:34.862 17:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:44.947 Initializing NVMe Controllers 00:32:44.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:44.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:44.947 Initialization complete. Launching workers. 00:32:44.947 ======================================================== 00:32:44.947 Latency(us) 00:32:44.947 Device Information : IOPS MiB/s Average min max 00:32:44.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3844.10 480.51 8324.59 671.26 21714.95 00:32:44.947 ======================================================== 00:32:44.947 Total : 3844.10 480.51 8324.59 671.26 21714.95 00:32:44.947 00:32:44.947 17:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:44.947 17:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:44.947 17:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:54.954 Initializing NVMe Controllers 00:32:54.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.954 Controller IO queue size 128, less than required. 00:32:54.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:54.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:54.954 Initialization complete. Launching workers. 00:32:54.954 ======================================================== 00:32:54.954 Latency(us) 00:32:54.954 Device Information : IOPS MiB/s Average min max 00:32:54.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15920.43 7.77 8041.66 3297.79 15849.39 00:32:54.954 ======================================================== 00:32:54.954 Total : 15920.43 7.77 8041.66 3297.79 15849.39 00:32:54.954 00:32:54.954 17:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:54.954 17:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:04.953 Initializing NVMe Controllers 00:33:04.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:04.953 Controller IO queue size 128, less than required. 00:33:04.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:04.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:04.953 Initialization complete. Launching workers. 00:33:04.953 ======================================================== 00:33:04.953 Latency(us) 00:33:04.953 Device Information : IOPS MiB/s Average min max 00:33:04.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1188.02 148.50 108284.61 15266.54 238499.23 00:33:04.953 ======================================================== 00:33:04.953 Total : 1188.02 148.50 108284.61 15266.54 238499.23 00:33:04.953 00:33:04.953 17:33:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.953 17:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 082189b2-f70a-46a1-89ef-4e8ae795fead 00:33:06.333 17:33:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:06.592 17:33:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec43faa3-eb06-40d7-9767-d2e84f39824e 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.852 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.852 rmmod nvme_tcp 
00:33:06.852 rmmod nvme_fabrics 00:33:06.852 rmmod nvme_keyring 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3192299 ']' 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3192299 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3192299 ']' 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3192299 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192299 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192299' 00:33:07.112 killing process with pid 3192299 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3192299 00:33:07.112 17:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3192299 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.022 17:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.566 00:33:11.566 real 1m32.442s 00:33:11.566 user 5m26.454s 00:33:11.566 sys 0m15.506s 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:11.566 ************************************ 00:33:11.566 END TEST nvmf_perf 00:33:11.566 ************************************ 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.566 ************************************ 00:33:11.566 START TEST nvmf_fio_host 00:33:11.566 ************************************ 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:11.566 * Looking for test storage... 00:33:11.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:11.566 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.567 17:33:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.567 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:11.568 
17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.568 17:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:19.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:19.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.707 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:19.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:19.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.708 17:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:19.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:33:19.708 00:33:19.708 --- 10.0.0.2 ping statistics --- 00:33:19.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.708 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:33:19.708 00:33:19.708 --- 10.0.0.1 ping statistics --- 00:33:19.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.708 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3212637 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3212637 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3212637 ']' 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.708 17:33:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.708 [2024-10-01 17:33:17.296128] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:33:19.708 [2024-10-01 17:33:17.296182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.708 [2024-10-01 17:33:17.365731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.708 [2024-10-01 17:33:17.401109] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.708 [2024-10-01 17:33:17.401148] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.708 [2024-10-01 17:33:17.401155] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.708 [2024-10-01 17:33:17.401162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.708 [2024-10-01 17:33:17.401168] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.708 [2024-10-01 17:33:17.401320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.708 [2024-10-01 17:33:17.401452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.708 [2024-10-01 17:33:17.401617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.708 [2024-10-01 17:33:17.401617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:19.708 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.708 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:33:19.708 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:19.969 [2024-10-01 17:33:18.253172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.969 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:19.969 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.969 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.969 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:19.969 Malloc1 00:33:20.229 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:20.229 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:20.490 17:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.749 [2024-10-01 17:33:19.038794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:20.749 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:20.750 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:20.750 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.750 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:20.750 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:20.750 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:21.028 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:21.028 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:21.028 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.028 17:33:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.294 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:21.294 fio-3.35 00:33:21.294 Starting 1 thread 00:33:23.864 00:33:23.864 test: (groupid=0, jobs=1): 
err= 0: pid=3213171: Tue Oct 1 17:33:22 2024 00:33:23.864 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec) 00:33:23.864 slat (usec): min=2, max=293, avg= 2.17, stdev= 2.52 00:33:23.864 clat (usec): min=3742, max=9098, avg=5157.34, stdev=552.53 00:33:23.864 lat (usec): min=3744, max=9111, avg=5159.51, stdev=552.73 00:33:23.864 clat percentiles (usec): 00:33:23.864 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:33:23.864 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:33:23.864 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5800], 00:33:23.864 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 8848], 00:33:23.864 | 99.99th=[ 9110] 00:33:23.864 bw ( KiB/s): min=50592, max=56136, per=99.91%, avg=54658.00, stdev=2714.33, samples=4 00:33:23.864 iops : min=12648, max=14034, avg=13664.50, stdev=678.58, samples=4 00:33:23.864 write: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(107MiB/2004msec); 0 zone resets 00:33:23.864 slat (usec): min=2, max=267, avg= 2.23, stdev= 1.79 00:33:23.864 clat (usec): min=2920, max=7822, avg=4161.06, stdev=456.40 00:33:23.864 lat (usec): min=2938, max=7829, avg=4163.29, stdev=456.63 00:33:23.864 clat percentiles (usec): 00:33:23.864 | 1.00th=[ 3458], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:33:23.864 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:33:23.864 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:33:23.864 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7308], 00:33:23.864 | 99.99th=[ 7701] 00:33:23.864 bw ( KiB/s): min=51024, max=56048, per=100.00%, avg=54628.00, stdev=2411.54, samples=4 00:33:23.864 iops : min=12756, max=14012, avg=13657.00, stdev=602.89, samples=4 00:33:23.864 lat (msec) : 4=17.65%, 10=82.35% 00:33:23.864 cpu : usr=75.29%, sys=23.36%, ctx=25, majf=0, minf=17 00:33:23.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:23.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:23.864 issued rwts: total=27409,27361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:23.864 00:33:23.864 Run status group 0 (all jobs): 00:33:23.864 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:33:23.864 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:23.864 
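The randrw run above is driven through the SPDK NVMe fio plugin (the LD_PRELOAD of build/fio/spdk_nvme) rather than the kernel initiator. For readers who want to reproduce a comparable run outside this harness, the sketch below is illustrative only: the contents of example_config.fio are not reproduced in this log, so the job file here is a hypothetical minimal equivalent assembled from the parameters visible on the command line above (ioengine=spdk, bs=4096, iodepth=128, randrw); paths and runtime are assumptions.

  # Sketch: fio against an NVMe/TCP subsystem through the SPDK fio plugin.
  # Assumes fio is installed at /usr/src/fio and the plugin was built in the SPDK tree.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PLUGIN="$SPDK_DIR/build/fio/spdk_nvme"
  cat > /tmp/nvme_tcp_job.fio <<'EOF'
  [global]
  ; engine name exposed by the preloaded spdk_nvme plugin
  ioengine=spdk
  ; the SPDK plugin runs jobs as threads, not forked processes
  thread=1
  rw=randrw
  bs=4096
  iodepth=128
  time_based=1
  runtime=2

  [job0]
  numjobs=1
  EOF
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio /tmp/nvme_tcp_job.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'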
17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.864 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.865 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:23.865 17:33:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:24.131 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:24.131 fio-3.35 00:33:24.131 Starting 1 thread 00:33:26.679 00:33:26.679 test: (groupid=0, jobs=1): err= 0: pid=3213992: Tue Oct 1 17:33:24 2024 00:33:26.679 read: IOPS=9476, BW=148MiB/s (155MB/s)(297MiB/2004msec) 00:33:26.679 slat (usec): min=3, max=109, avg= 3.59, stdev= 1.57 00:33:26.679 clat (usec): min=1422, max=16019, avg=8144.37, stdev=1881.35 00:33:26.679 lat (usec): min=1425, max=16022, avg=8147.96, stdev=1881.48 00:33:26.679 clat percentiles (usec): 00:33:26.679 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6456], 00:33:26.679 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:33:26.679 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11207], 00:33:26.679 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14877], 99.95th=[15139], 00:33:26.679 | 99.99th=[15270] 00:33:26.679 bw ( KiB/s): min=67520, max=88064, per=49.23%, avg=74648.00, stdev=9139.29, samples=4 00:33:26.679 iops : min= 4220, max= 5504, avg=4665.50, stdev=571.21, samples=4 00:33:26.679 write: IOPS=5548, BW=86.7MiB/s (90.9MB/s)(153MiB/1765msec); 0 zone resets 00:33:26.679 slat (usec): min=39, max=359, 
avg=40.87, stdev= 6.89 00:33:26.679 clat (usec): min=2099, max=16570, avg=9428.88, stdev=1566.86 00:33:26.679 lat (usec): min=2139, max=16610, avg=9469.75, stdev=1567.93 00:33:26.679 clat percentiles (usec): 00:33:26.679 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 00:33:26.679 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:26.679 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:33:26.679 | 99.00th=[13829], 99.50th=[14222], 99.90th=[15795], 99.95th=[16450], 00:33:26.679 | 99.99th=[16581] 00:33:26.679 bw ( KiB/s): min=71968, max=90848, per=87.54%, avg=77712.00, stdev=8812.87, samples=4 00:33:26.679 iops : min= 4498, max= 5678, avg=4857.00, stdev=550.80, samples=4 00:33:26.679 lat (msec) : 2=0.02%, 4=0.39%, 10=76.04%, 20=23.55% 00:33:26.679 cpu : usr=86.47%, sys=12.38%, ctx=14, majf=0, minf=35 00:33:26.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:26.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.679 issued rwts: total=18991,9793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.679 00:33:26.679 Run status group 0 (all jobs): 00:33:26.679 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (311MB), run=2004-2004msec 00:33:26.679 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=153MiB (160MB), run=1765-1765msec 00:33:26.679 17:33:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:33:26.679 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:27.252 Nvme0n1 00:33:27.252 17:33:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=682c0a03-a4dc-4035-862f-3e5eebed933d 00:33:27.823 17:33:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 682c0a03-a4dc-4035-862f-3e5eebed933d 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=682c0a03-a4dc-4035-862f-3e5eebed933d 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:27.823 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:28.083 { 00:33:28.083 "uuid": "682c0a03-a4dc-4035-862f-3e5eebed933d", 00:33:28.083 "name": "lvs_0", 00:33:28.083 "base_bdev": "Nvme0n1", 00:33:28.083 "total_data_clusters": 1787, 00:33:28.083 "free_clusters": 1787, 00:33:28.083 "block_size": 512, 00:33:28.083 "cluster_size": 1073741824 00:33:28.083 } 00:33:28.083 ]' 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="682c0a03-a4dc-4035-862f-3e5eebed933d") .free_clusters' 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="682c0a03-a4dc-4035-862f-3e5eebed933d") .cluster_size' 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:33:28.083 1829888 00:33:28.083 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:28.343 bc04c5f9-8d41-46fb-b22e-a38b0390d893 00:33:28.343 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:28.343 17:33:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:28.603 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:28.864 17:33:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:29.125 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:29.125 fio-3.35 00:33:29.125 Starting 1 thread 00:33:31.670 00:33:31.670 test: (groupid=0, jobs=1): err= 0: pid=3215189: Tue Oct 1 17:33:29 2024 00:33:31.670 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.4MiB/2005msec) 00:33:31.670 slat (usec): min=2, max=138, avg= 2.21, stdev= 1.26 00:33:31.670 clat (usec): min=2547, max=11598, avg=6787.11, stdev=504.46 00:33:31.670 lat (usec): min=2564, max=11600, avg=6789.31, stdev=504.40 00:33:31.670 clat percentiles (usec): 00:33:31.670 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:33:31.670 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:33:31.670 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7570], 00:33:31.670 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[ 9503], 99.95th=[10552], 00:33:31.670 | 99.99th=[11600] 00:33:31.670 bw ( KiB/s): min=40328, 
max=42232, per=99.89%, avg=41542.00, stdev=846.95, samples=4 00:33:31.670 iops : min=10082, max=10558, avg=10385.50, stdev=211.74, samples=4 00:33:31.670 write: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(81.5MiB/2005msec); 0 zone resets 00:33:31.670 slat (nsec): min=2102, max=112339, avg=2280.69, stdev=821.35 00:33:31.670 clat (usec): min=1104, max=10542, avg=5425.84, stdev=430.77 00:33:31.670 lat (usec): min=1112, max=10544, avg=5428.12, stdev=430.75 00:33:31.670 clat percentiles (usec): 00:33:31.670 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:33:31.670 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:33:31.670 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6063], 00:33:31.670 | 99.00th=[ 6390], 99.50th=[ 6456], 99.90th=[ 7504], 99.95th=[ 9241], 00:33:31.670 | 99.99th=[10028] 00:33:31.670 bw ( KiB/s): min=40880, max=42024, per=100.00%, avg=41604.00, stdev=501.33, samples=4 00:33:31.670 iops : min=10220, max=10506, avg=10401.00, stdev=125.33, samples=4 00:33:31.670 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.05% 00:33:31.670 cpu : usr=71.51%, sys=27.50%, ctx=39, majf=0, minf=20 00:33:31.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:31.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:31.670 issued rwts: total=20845,20855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:31.670 00:33:31.670 Run status group 0 (all jobs): 00:33:31.670 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.4MiB (85.4MB), run=2005-2005msec 00:33:31.670 WRITE: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=81.5MiB (85.4MB), run=2005-2005msec 00:33:31.670 17:33:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:31.670 17:33:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c26faefc-c20f-4abb-94e7-54dd46ba1d0b 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c26faefc-c20f-4abb-94e7-54dd46ba1d0b 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c26faefc-c20f-4abb-94e7-54dd46ba1d0b 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:32.615 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:32.876 { 00:33:32.876 "uuid": "682c0a03-a4dc-4035-862f-3e5eebed933d", 00:33:32.876 "name": "lvs_0", 00:33:32.876 "base_bdev": "Nvme0n1", 00:33:32.876 "total_data_clusters": 1787, 00:33:32.876 "free_clusters": 0, 00:33:32.876 "block_size": 512, 00:33:32.876 
"cluster_size": 1073741824 00:33:32.876 }, 00:33:32.876 { 00:33:32.876 "uuid": "c26faefc-c20f-4abb-94e7-54dd46ba1d0b", 00:33:32.876 "name": "lvs_n_0", 00:33:32.876 "base_bdev": "bc04c5f9-8d41-46fb-b22e-a38b0390d893", 00:33:32.876 "total_data_clusters": 457025, 00:33:32.876 "free_clusters": 457025, 00:33:32.876 "block_size": 512, 00:33:32.876 "cluster_size": 4194304 00:33:32.876 } 00:33:32.876 ]' 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c26faefc-c20f-4abb-94e7-54dd46ba1d0b") .free_clusters' 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c26faefc-c20f-4abb-94e7-54dd46ba1d0b") .cluster_size' 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:33:32.876 1828100 00:33:32.876 17:33:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:33.817 c0a8c0f0-f818-4a29-9ace-7844824a7341 00:33:33.817 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:34.079 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:34.339 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:34.339 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:34.340 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:34.627 17:33:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:34.890 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:34.890 fio-3.35 00:33:34.890 Starting 1 thread 00:33:37.430 00:33:37.430 test: (groupid=0, jobs=1): err= 0: pid=3216381: Tue Oct 1 17:33:35 2024 00:33:37.430 read: IOPS=9285, BW=36.3MiB/s (38.0MB/s)(72.8MiB/2006msec) 00:33:37.430 slat (usec): min=2, max=117, avg= 2.22, stdev= 1.13 00:33:37.430 clat (usec): min=2104, max=12628, avg=7624.15, stdev=587.33 00:33:37.430 lat (usec): min=2121, max=12631, avg=7626.37, stdev=587.27 00:33:37.430 clat percentiles (usec): 00:33:37.430 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:33:37.430 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:33:37.430 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:33:37.430 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10683], 99.95th=[11600], 00:33:37.430 | 99.99th=[12518] 00:33:37.430 bw ( KiB/s): min=36000, max=37704, per=99.90%, avg=37106.00, stdev=755.21, samples=4 00:33:37.430 iops : min= 9000, max= 9426, avg=9276.50, stdev=188.80, samples=4 00:33:37.430 write: IOPS=9287, BW=36.3MiB/s (38.0MB/s)(72.8MiB/2006msec); 0 zone resets 00:33:37.430 slat (nsec): min=2093, max=97044, avg=2286.55, stdev=749.51 00:33:37.430 clat (usec): min=1032, max=10741, avg=6078.48, stdev=505.61 00:33:37.430 lat (usec): min=1040, max=10744, avg=6080.77, stdev=505.58 00:33:37.430 clat percentiles (usec): 00:33:37.430 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:33:37.430 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:33:37.430 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:33:37.430 | 99.00th=[ 7177], 
99.50th=[ 7308], 99.90th=[ 8979], 99.95th=[ 9896], 00:33:37.430 | 99.99th=[10683] 00:33:37.430 bw ( KiB/s): min=36880, max=37440, per=100.00%, avg=37156.00, stdev=292.81, samples=4 00:33:37.430 iops : min= 9220, max= 9360, avg=9289.00, stdev=73.20, samples=4 00:33:37.430 lat (msec) : 2=0.01%, 4=0.10%, 10=99.80%, 20=0.09% 00:33:37.430 cpu : usr=70.62%, sys=28.43%, ctx=47, majf=0, minf=20 00:33:37.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:37.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:37.430 issued rwts: total=18627,18631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:37.430 00:33:37.430 Run status group 0 (all jobs): 00:33:37.430 READ: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.8MiB (76.3MB), run=2006-2006msec 00:33:37.430 WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.8MiB (76.3MB), run=2006-2006msec 00:33:37.430 17:33:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:37.430 17:33:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:37.430 17:33:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:39.968 17:33:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:39.968 17:33:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:40.228 17:33:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:40.488 17:33:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.396 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.396 rmmod nvme_tcp 00:33:42.396 rmmod nvme_fabrics 00:33:42.396 rmmod nvme_keyring 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:42.656 
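Earlier in this run, get_lvs_free_mb reported 1829888 MiB free for lvs_0 and 1828100 MiB for the nested lvs_n_0. Those figures are simply free_clusters multiplied by the cluster size; the snippet below is a standalone sketch of that arithmetic using the values returned by bdev_lvol_get_lvstores above (the harness's own helper lives in autotest_common.sh and extracts the fields with jq, as shown in the log):

  # Sketch: recompute the free-MiB figures reported by get_lvs_free_mb.
  # lvs_0: 1787 free clusters of 1 GiB (1073741824 bytes) each
  echo $(( 1787 * (1073741824 / 1024 / 1024) ))    # -> 1829888
  # lvs_n_0: 457025 free clusters of 4 MiB (4194304 bytes) each
  echo $(( 457025 * (4194304 / 1024 / 1024) ))     # -> 1828100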
17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3212637 ']' 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3212637 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3212637 ']' 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3212637 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:42.656 17:33:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212637 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212637' 00:33:42.656 killing process with pid 3212637 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3212637 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3212637 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.656 17:33:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.200 00:33:45.200 real 0m33.614s 00:33:45.200 user 2m36.507s 00:33:45.200 sys 0m9.749s 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.200 ************************************ 00:33:45.200 END TEST nvmf_fio_host 00:33:45.200 ************************************ 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:33:45.200 17:33:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.200 ************************************ 00:33:45.201 START TEST nvmf_failover 00:33:45.201 ************************************ 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:45.201 * Looking for test storage... 00:33:45.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.201 --rc genhtml_branch_coverage=1 00:33:45.201 --rc genhtml_function_coverage=1 00:33:45.201 --rc genhtml_legend=1 00:33:45.201 --rc geninfo_all_blocks=1 00:33:45.201 --rc geninfo_unexecuted_blocks=1 00:33:45.201 00:33:45.201 ' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.201 --rc genhtml_branch_coverage=1 00:33:45.201 --rc genhtml_function_coverage=1 00:33:45.201 --rc genhtml_legend=1 00:33:45.201 --rc geninfo_all_blocks=1 00:33:45.201 --rc geninfo_unexecuted_blocks=1 00:33:45.201 00:33:45.201 ' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.201 --rc genhtml_branch_coverage=1 00:33:45.201 --rc genhtml_function_coverage=1 00:33:45.201 --rc genhtml_legend=1 00:33:45.201 --rc geninfo_all_blocks=1 00:33:45.201 --rc geninfo_unexecuted_blocks=1 00:33:45.201 00:33:45.201 ' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:45.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.201 --rc genhtml_branch_coverage=1 00:33:45.201 --rc genhtml_function_coverage=1 00:33:45.201 --rc genhtml_legend=1 00:33:45.201 --rc geninfo_all_blocks=1 00:33:45.201 --rc geninfo_unexecuted_blocks=1 00:33:45.201 00:33:45.201 ' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:45.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.201 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
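The failover test assigns the same rpc.py entry point (rpc_py above) that the fio_host test used to stand up its target. For reference, the bring-up sequence from that earlier test, consolidated into a standalone sketch (commands copied from the log above; only the RPC variable name is an addition here):

  # Sketch: NVMe-oF TCP target bring-up as performed earlier by host/fio.sh.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420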
00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.202 17:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:53.351 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:53.351 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:53.351 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:53.351 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.351 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:33:53.352 00:33:53.352 --- 10.0.0.2 ping statistics --- 00:33:53.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.352 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:33:53.352 00:33:53.352 --- 10.0.0.1 ping statistics --- 00:33:53.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.352 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3221791 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3221791 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3221791 ']' 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:53.352 17:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.352 [2024-10-01 17:33:50.859937] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:33:53.352 [2024-10-01 17:33:50.860016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.352 [2024-10-01 17:33:50.948827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.352 [2024-10-01 17:33:50.997209] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:53.352 [2024-10-01 17:33:50.997264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.352 [2024-10-01 17:33:50.997272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.352 [2024-10-01 17:33:50.997279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.352 [2024-10-01 17:33:50.997285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.352 [2024-10-01 17:33:50.997414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.352 [2024-10-01 17:33:50.997581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.352 [2024-10-01 17:33:50.997580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.352 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.352 [2024-10-01 17:33:51.861881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.676 17:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:53.676 Malloc0 00:33:53.676 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.955 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.955 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.247 [2024-10-01 17:33:52.595123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.247 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:54.247 [2024-10-01 17:33:52.771610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:54.508 [2024-10-01 17:33:52.956187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3222325 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3222325 /var/tmp/bdevperf.sock 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3222325 ']' 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.508 17:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:54.768 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.768 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:54.768 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:55.339 NVMe0n1 00:33:55.339 17:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:55.600 00:33:55.600 17:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3222426 00:33:55.600 17:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:55.600 17:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:56.540 17:33:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.800 [2024-10-01 17:33:55.187279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187332] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.800 [2024-10-01 17:33:55.187346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f940 is same with the state(6) to be set 00:33:56.801 17:33:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:00.098 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:00.098 00:34:00.359 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:00.359 [2024-10-01 17:33:58.818070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.359 [2024-10-01 17:33:58.818137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818165] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 [2024-10-01 17:33:58.818211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5706f0 is same with the state(6) to be set 00:34:00.360 17:33:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:03.659 17:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.659 [2024-10-01 17:34:02.008336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.659 17:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:04.602 17:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:04.862 17:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3222426 00:34:11.447 { 00:34:11.447 "results": [ 00:34:11.447 { 00:34:11.447 "job": "NVMe0n1", 00:34:11.447 "core_mask": "0x1", 00:34:11.447 "workload": "verify", 00:34:11.447 "status": "finished", 00:34:11.447 "verify_range": { 00:34:11.447 "start": 0, 00:34:11.447 "length": 16384 00:34:11.447 }, 00:34:11.447 "queue_depth": 128, 00:34:11.447 "io_size": 4096, 00:34:11.447 "runtime": 15.009887, 00:34:11.447 "iops": 11256.047430603574, 00:34:11.447 "mibps": 43.96893527579521, 00:34:11.447 "io_failed": 10149, 00:34:11.447 "io_timeout": 0, 00:34:11.447 "avg_latency_us": 10699.775243391532, 00:34:11.447 "min_latency_us": 539.3066666666666, 00:34:11.447 "max_latency_us": 16711.68 00:34:11.447 } 00:34:11.447 ], 00:34:11.447 "core_count": 1 00:34:11.447 } 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@950 -- # '[' -z 3222325 ']' 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3222325' 00:34:11.447 killing process with pid 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3222325 00:34:11.447 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:11.447 [2024-10-01 17:33:53.037175] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:34:11.447 [2024-10-01 17:33:53.037237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222325 ] 00:34:11.447 [2024-10-01 17:33:53.098527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.447 [2024-10-01 17:33:53.129555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.447 Running I/O for 15 seconds... 
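What follows is bdevperf's own log (the try.txt dump started above): the per-command "ABORTED - SQ DELETION" completions at 17:33:55 are the in-flight I/Os aborted when the first listener (port 4420) was removed, which is exactly what the failover test is probing. The JSON block a little earlier summarised the outcome (roughly 11,256 IOPS over the 15 s verify run, with 10149 failed I/Os recorded while listeners were removed and re-added). The sequence of RPCs that drove the whole exercise is spread across the trace above; condensed, it is roughly the sketch below, built from the same rpc.py and bdevperf invocations shown in the log. $SPDK stands in for the jenkins workspace checkout, and the waitforlisten/killprocess helpers of the real host/failover.sh are replaced here by plain sleep/kill.

SPDK=/path/to/spdk                        # assumed checkout path
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# target side: export one malloc bdev as a namespace on three TCP listeners
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done

# initiator side: bdevperf in wait mode (-z), attached to the first two paths
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
BDEVPERF=$!
sleep 1   # the real test waits for /var/tmp/bdevperf.sock with waitforlisten
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
RUN=$!
sleep 1

# while I/O runs, keep removing the listener in use so the host has to fail over
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420; sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421; sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420; sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
wait $RUN        # perform_tests prints the JSON summary seen above once the run completes
kill $BDEVPERF   # the real script uses killprocess and then cats try.txt, as seen in this trace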
00:34:11.447 11123.00 IOPS, 43.45 MiB/s [2024-10-01 17:33:55.187603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 
[2024-10-01 17:33:55.187805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.447 [2024-10-01 17:33:55.187951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.447 [2024-10-01 17:33:55.187960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.187967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.187976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.187983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.187997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.448 [2024-10-01 17:33:55.188464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 
17:33:55.188500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.448 [2024-10-01 17:33:55.188596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.448 [2024-10-01 17:33:55.188603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.188889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.188985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.449 [2024-10-01 17:33:55.188992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 
[2024-10-01 17:33:55.189195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.449 [2024-10-01 17:33:55.189245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.449 [2024-10-01 17:33:55.189252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.450 [2024-10-01 17:33:55.189405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.450 [2024-10-01 17:33:55.189793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239d450 is same with the state(6) to be set 00:34:11.450 [2024-10-01 17:33:55.189810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:11.450 [2024-10-01 17:33:55.189816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:11.450 [2024-10-01 17:33:55.189823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:34:11.450 [2024-10-01 17:33:55.189831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189865] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239d450 was disconnected and freed. reset controller. 
00:34:11.450 [2024-10-01 17:33:55.189874] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:11.450 [2024-10-01 17:33:55.189896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.450 [2024-10-01 17:33:55.189904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.450 [2024-10-01 17:33:55.189920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.450 [2024-10-01 17:33:55.189936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.450 [2024-10-01 17:33:55.189945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.451 [2024-10-01 17:33:55.189952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:55.189959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.451 [2024-10-01 17:33:55.193509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.451 [2024-10-01 17:33:55.193533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237ce40 (9): Bad file descriptor 00:34:11.451 [2024-10-01 17:33:55.276135] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
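[editorial note] The block above is one complete disconnect/failover cycle: queued READ/WRITE commands on qid:1 are printed and completed as ABORTED - SQ DELETION, the qpair is freed, failover moves from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset succeeds. The IOPS/MiB/s progress figures that follow are consistent with 4 KiB I/O (len:8, suggesting 512-byte blocks): 10787 IOPS x 4096 B is roughly 42.14 MiB/s. As an illustrative aid only (not part of the test suite), the sketch below tallies this kind of output; it assumes the excerpt has been saved to a file passed on the command line, and it scans with regexes because several entries can share one physical line here.

    #!/usr/bin/env python3
    """Tally NVMe abort/failover notices in an SPDK log excerpt like the one above."""
    import re
    import sys
    from collections import Counter

    # Patterns copied from the notice formats visible in this log.
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
    ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
    FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

    def summarize(text: str) -> None:
        # Count printed commands by opcode and aborted completions.
        ops = Counter(m.group(1) for m in CMD_RE.finditer(text))
        aborts = len(ABORT_RE.findall(text))
        print(f"printed commands: {dict(ops)}  aborted completions: {aborts}")
        # Report each failover transition in order of appearance.
        for src, dst in FAILOVER_RE.findall(text):
            print(f"failover: {src} -> {dst}")

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
            summarize(fh.read())

Run it against a saved excerpt (hypothetical file name), e.g. "python3 tally_aborts.py nvmf_failover.log", to get one summary line plus one line per failover transition.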
00:34:11.451 10787.00 IOPS, 42.14 MiB/s 10967.00 IOPS, 42.84 MiB/s 11053.00 IOPS, 43.18 MiB/s [2024-10-01 17:33:58.820505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.451 [2024-10-01 17:33:58.820542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.451 [2024-10-01 17:33:58.820566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.451 [2024-10-01 17:33:58.820581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.451 [2024-10-01 17:33:58.820597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237ce40 is same with the state(6) to be set 00:34:11.451 [2024-10-01 17:33:58.820665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 
17:33:58.820937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.820980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.820987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.451 [2024-10-01 17:33:58.821112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.451 [2024-10-01 17:33:58.821121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.452 [2024-10-01 17:33:58.821700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.452 [2024-10-01 17:33:58.821718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.452 [2024-10-01 17:33:58.821735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.452 [2024-10-01 17:33:58.821744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.452 [2024-10-01 17:33:58.821751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821794] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.453 [2024-10-01 17:33:58.821952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.821969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.821985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.821998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.453 [2024-10-01 17:33:58.822391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.453 [2024-10-01 17:33:58.822400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822473] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.454 [2024-10-01 17:33:58.822739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:33:58.822808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:11.454 [2024-10-01 17:33:58.822826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:11.454 [2024-10-01 17:33:58.822833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:11.454 [2024-10-01 17:33:58.822839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45680 len:8 PRP1 0x0 PRP2 0x0 00:34:11.454 [2024-10-01 17:33:58.822847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:33:58.822882] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239f5a0 was disconnected and freed. reset controller. 00:34:11.454 [2024-10-01 17:33:58.822891] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:11.454 [2024-10-01 17:33:58.822899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.454 [2024-10-01 17:33:58.826427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.454 [2024-10-01 17:33:58.826449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237ce40 (9): Bad file descriptor 00:34:11.454 [2024-10-01 17:33:58.907471] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:11.454 11012.00 IOPS, 43.02 MiB/s 11159.50 IOPS, 43.59 MiB/s 11338.57 IOPS, 44.29 MiB/s 11390.38 IOPS, 44.49 MiB/s 11416.00 IOPS, 44.59 MiB/s [2024-10-01 17:34:03.198494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.454 [2024-10-01 17:34:03.198536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.454 [2024-10-01 17:34:03.198555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.454 [2024-10-01 17:34:03.198571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.454 [2024-10-01 17:34:03.198587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237ce40 is same with the state(6) to be set 00:34:11.454 [2024-10-01 17:34:03.198662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:34:03.198672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:34:03.198695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:34:03.198712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.454 [2024-10-01 17:34:03.198728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.454 [2024-10-01 17:34:03.198738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.198982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.198990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86408 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 
[2024-10-01 17:34:03.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.455 [2024-10-01 17:34:03.199397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.455 [2024-10-01 17:34:03.199404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 
17:34:03.199903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.199982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.456 [2024-10-01 17:34:03.199992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.456 [2024-10-01 17:34:03.200005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.457 [2024-10-01 17:34:03.200588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.457 [2024-10-01 17:34:03.200605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.457 [2024-10-01 
17:34:03.200622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.457 [2024-10-01 17:34:03.200641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.457 [2024-10-01 17:34:03.200659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.457 [2024-10-01 17:34:03.200668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.457 [2024-10-01 17:34:03.200675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.458 [2024-10-01 17:34:03.200849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.458 [2024-10-01 17:34:03.200869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:11.458 [2024-10-01 17:34:03.200897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:11.458 [2024-10-01 17:34:03.200904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:34:11.458 [2024-10-01 17:34:03.200912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.458 [2024-10-01 17:34:03.200947] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239ffc0 was disconnected and freed. reset controller. 00:34:11.458 [2024-10-01 17:34:03.200957] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:11.458 [2024-10-01 17:34:03.200968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.458 [2024-10-01 17:34:03.204496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.458 [2024-10-01 17:34:03.204519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237ce40 (9): Bad file descriptor 00:34:11.458 [2024-10-01 17:34:03.330328] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
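The dump above ends the same way the earlier one did: the aborted qpair is disconnected and freed, bdev_nvme starts a failover to the next path (here from 10.0.0.2:4422 back to 10.0.0.2:4420), the controller is disconnected, and the reset completes successfully. Only a handful of NOTICE lines carry that story. Below is a minimal sketch for pulling them out of a capture like this one, assuming the console output has been saved to a file (failover.log is a placeholder name, not a file the test writes):

# Summarize path changes and reset results from a saved console capture (placeholder path).
LOG=failover.log
# Each path change is logged by bdev_nvme_failover_trid as "Start failover from <old> to <new>".
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG" | sort | uniq -c
# The test itself only asserts on the number of successful resets:
grep -c 'Resetting controller successful' "$LOG"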
00:34:11.458 11288.30 IOPS, 44.09 MiB/s 11279.18 IOPS, 44.06 MiB/s 11275.75 IOPS, 44.05 MiB/s 11268.23 IOPS, 44.02 MiB/s 11266.43 IOPS, 44.01 MiB/s 00:34:11.458 Latency(us) 00:34:11.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.458 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:11.458 Verification LBA range: start 0x0 length 0x4000 00:34:11.458 NVMe0n1 : 15.01 11256.05 43.97 676.15 0.00 10699.78 539.31 16711.68 00:34:11.458 =================================================================================================================== 00:34:11.458 Total : 11256.05 43.97 676.15 0.00 10699.78 539.31 16711.68 00:34:11.458 Received shutdown signal, test time was about 15.000000 seconds 00:34:11.458 00:34:11.458 Latency(us) 00:34:11.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.458 =================================================================================================================== 00:34:11.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3225429 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3225429 /var/tmp/bdevperf.sock 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3225429 ']' 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
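The trace above launches a fresh bdevperf instance in RPC-server mode (-z) on /var/tmp/bdevperf.sock with a queue depth of 128 and 4096-byte verify I/O, then waits for its RPC socket before driving it. A minimal sketch of that launch-and-wait step is shown below, reusing the workspace path and flags from the trace; the polling loop is only an approximation of what waitforlisten does (not its actual implementation), and rpc_get_methods is used here simply as a cheap query to see whether the server is answering:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from the trace above
SOCK=/var/tmp/bdevperf.sock
# Start bdevperf in "wait for RPC" mode with the same flags as the traced command.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Poll until the bdevperf RPC server answers on the UNIX-domain socket.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done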
00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:11.458 [2024-10-01 17:34:09.765239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:11.458 [2024-10-01 17:34:09.937680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:11.458 17:34:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.028 NVMe0n1 00:34:12.028 17:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.028 00:34:12.288 17:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.548 00:34:12.548 17:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:12.548 17:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:12.548 17:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.809 17:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:16.109 17:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:16.109 17:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:16.109 17:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3226352 00:34:16.109 17:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:16.109 17:34:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3226352 00:34:17.050 { 00:34:17.050 "results": [ 00:34:17.050 { 00:34:17.050 "job": "NVMe0n1", 00:34:17.050 "core_mask": "0x1", 00:34:17.050 "workload": "verify", 
00:34:17.050 "status": "finished", 00:34:17.050 "verify_range": { 00:34:17.050 "start": 0, 00:34:17.050 "length": 16384 00:34:17.050 }, 00:34:17.050 "queue_depth": 128, 00:34:17.050 "io_size": 4096, 00:34:17.050 "runtime": 1.005342, 00:34:17.050 "iops": 11205.1421307376, 00:34:17.050 "mibps": 43.77008644819375, 00:34:17.050 "io_failed": 0, 00:34:17.050 "io_timeout": 0, 00:34:17.050 "avg_latency_us": 11372.324928539725, 00:34:17.050 "min_latency_us": 2594.133333333333, 00:34:17.050 "max_latency_us": 12451.84 00:34:17.050 } 00:34:17.050 ], 00:34:17.050 "core_count": 1 00:34:17.050 } 00:34:17.050 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:17.050 [2024-10-01 17:34:09.441377] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:34:17.050 [2024-10-01 17:34:09.441441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225429 ] 00:34:17.050 [2024-10-01 17:34:09.501846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.050 [2024-10-01 17:34:09.530911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.050 [2024-10-01 17:34:11.215377] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:17.050 [2024-10-01 17:34:11.215426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.050 [2024-10-01 17:34:11.215441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.050 [2024-10-01 17:34:11.215453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.050 [2024-10-01 17:34:11.215461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.050 [2024-10-01 17:34:11.215469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.050 [2024-10-01 17:34:11.215476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.050 [2024-10-01 17:34:11.215484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.050 [2024-10-01 17:34:11.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.050 [2024-10-01 17:34:11.215504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.050 [2024-10-01 17:34:11.215533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.050 [2024-10-01 17:34:11.215554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac9e40 (9): Bad file descriptor 00:34:17.050 [2024-10-01 17:34:11.226696] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:17.050 Running I/O for 1 seconds... 
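The pass above is the core of the failover check: NVMe0 is attached once per path (4420, 4421, 4422) under the same controller name, the active 4420 path is detached, and bdev_nvme logs "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" followed by "Resetting controller successful" while bdevperf keeps verifying I/O. A condensed sketch of that RPC sequence, reusing the addresses and socket from the trace (SPDK_DIR as assumed earlier):

    RPC="$SPDK_DIR/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1
    SOCK=/var/tmp/bdevperf.sock

    # Target side: expose extra listeners for the same subsystem
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    # Host side (bdevperf): attach every path under one controller name
    for port in 4420 4421 4422; do
        "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    done

    # Drop the active path; bdev_nvme should fail over to 4421 and reset the controller
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"

    # Kick an I/O run against the multipath bdev and collect the JSON result shown above
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests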
00:34:17.050 11137.00 IOPS, 43.50 MiB/s 00:34:17.050 Latency(us) 00:34:17.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.050 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:17.050 Verification LBA range: start 0x0 length 0x4000 00:34:17.050 NVMe0n1 : 1.01 11205.14 43.77 0.00 0.00 11372.32 2594.13 12451.84 00:34:17.050 =================================================================================================================== 00:34:17.050 Total : 11205.14 43.77 0.00 0.00 11372.32 2594.13 12451.84 00:34:17.050 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.050 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:17.311 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.572 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.572 17:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:17.572 17:34:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.832 17:34:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:21.135 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3225429 ']' 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3225429' 00:34:21.136 killing process with pid 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3225429 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:21.136 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.397 rmmod nvme_tcp 00:34:21.397 rmmod nvme_fabrics 00:34:21.397 rmmod nvme_keyring 00:34:21.397 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3221791 ']' 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3221791 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3221791 ']' 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3221791 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.658 17:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3221791 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3221791' 00:34:21.658 killing process with pid 3221791 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3221791 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3221791 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:21.658 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:34:21.659 
17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.659 17:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.204 00:34:24.204 real 0m38.892s 00:34:24.204 user 1m59.164s 00:34:24.204 sys 0m8.334s 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:24.204 ************************************ 00:34:24.204 END TEST nvmf_failover 00:34:24.204 ************************************ 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.204 ************************************ 00:34:24.204 START TEST nvmf_host_discovery 00:34:24.204 ************************************ 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:24.204 * Looking for test storage... 
00:34:24.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.204 --rc genhtml_branch_coverage=1 00:34:24.204 --rc genhtml_function_coverage=1 00:34:24.204 --rc genhtml_legend=1 00:34:24.204 --rc geninfo_all_blocks=1 00:34:24.204 --rc geninfo_unexecuted_blocks=1 00:34:24.204 00:34:24.204 ' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.204 --rc genhtml_branch_coverage=1 00:34:24.204 --rc genhtml_function_coverage=1 00:34:24.204 --rc genhtml_legend=1 00:34:24.204 --rc geninfo_all_blocks=1 00:34:24.204 --rc geninfo_unexecuted_blocks=1 00:34:24.204 00:34:24.204 ' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.204 --rc genhtml_branch_coverage=1 00:34:24.204 --rc genhtml_function_coverage=1 00:34:24.204 --rc genhtml_legend=1 00:34:24.204 --rc geninfo_all_blocks=1 00:34:24.204 --rc geninfo_unexecuted_blocks=1 00:34:24.204 00:34:24.204 ' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.204 --rc genhtml_branch_coverage=1 00:34:24.204 --rc genhtml_function_coverage=1 00:34:24.204 --rc genhtml_legend=1 00:34:24.204 --rc geninfo_all_blocks=1 00:34:24.204 --rc geninfo_unexecuted_blocks=1 00:34:24.204 00:34:24.204 ' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:24.204 17:34:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.204 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:24.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:24.205 17:34:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:32.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:32.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.334 17:34:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:32.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:32.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.334 
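The lines that follow assemble the point-to-point phy topology used throughout this run: the first E810 port (cvl_0_0) is moved into its own network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for the NVMe/TCP port and a ping in each direction as a sanity check. A condensed sketch of that setup, assuming the same interface names as on this rig:

    TARGET_NS=cvl_0_0_ns_spdk

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                          # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address (root namespace)
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside namespace)

    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    # Let NVMe/TCP traffic reach the default port, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1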
17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.334 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:34:32.334 00:34:32.334 --- 10.0.0.2 ping statistics --- 00:34:32.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.335 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:34:32.335 00:34:32.335 --- 10.0.0.1 ping statistics --- 00:34:32.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.335 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3231461 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3231461 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3231461 ']' 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.335 17:34:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 [2024-10-01 17:34:29.890148] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:34:32.335 [2024-10-01 17:34:29.890203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.335 [2024-10-01 17:34:29.977059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.335 [2024-10-01 17:34:30.029644] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.335 [2024-10-01 17:34:30.029702] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.335 [2024-10-01 17:34:30.029711] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.335 [2024-10-01 17:34:30.029719] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.335 [2024-10-01 17:34:30.029725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:32.335 [2024-10-01 17:34:30.029748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 [2024-10-01 17:34:30.749315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 [2024-10-01 17:34:30.761600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 null0 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 null1 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3231712 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3231712 /tmp/host.sock 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3231712 ']' 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:32.335 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.335 17:34:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.335 [2024-10-01 17:34:30.859471] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:34:32.335 [2024-10-01 17:34:30.859539] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3231712 ] 00:34:32.596 [2024-10-01 17:34:30.927444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.596 [2024-10-01 17:34:30.967660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:32.596 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.597 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 [2024-10-01 17:34:31.391104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.857 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:33.118 17:34:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:34:33.118 17:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:33.689 [2024-10-01 17:34:32.086088] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:33.689 [2024-10-01 17:34:32.086111] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:33.689 
[2024-10-01 17:34:32.086125] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:33.689 [2024-10-01 17:34:32.173399] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:33.949 [2024-10-01 17:34:32.237425] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:33.949 [2024-10-01 17:34:32.237445] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:34.210 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:34.470 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:34.471 17:34:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.471 17:34:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.731 [2024-10-01 17:34:33.035405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:34.731 [2024-10-01 17:34:33.035798] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:34.731 [2024-10-01 17:34:33.035826] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:34.731 
17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.731 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:34.732 [2024-10-01 17:34:33.124373] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:34.732 17:34:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:34.732 17:34:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:34.732 [2024-10-01 17:34:33.225201] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:34.732 [2024-10-01 17:34:33.225220] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:34.732 [2024-10-01 17:34:33.225225] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:35.671 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.672 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:35.935 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:35.936 17:34:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.936 [2024-10-01 17:34:34.303325] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:35.936 [2024-10-01 17:34:34.303349] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:35.936 [2024-10-01 17:34:34.308608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.936 [2024-10-01 17:34:34.308628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.936 [2024-10-01 17:34:34.308637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.936 [2024-10-01 17:34:34.308645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.936 [2024-10-01 17:34:34.308653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.936 [2024-10-01 17:34:34.308660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.936 [2024-10-01 17:34:34.308668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.936 [2024-10-01 17:34:34.308675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.936 [2024-10-01 17:34:34.308682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:35.936 [2024-10-01 17:34:34.318622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.936 [2024-10-01 17:34:34.328661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.936 [2024-10-01 17:34:34.329243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.936 [2024-10-01 17:34:34.329282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.936 [2024-10-01 17:34:34.329293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.936 [2024-10-01 17:34:34.329312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.936 [2024-10-01 17:34:34.329336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.936 [2024-10-01 17:34:34.329344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.936 [2024-10-01 17:34:34.329353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:34:35.936 [2024-10-01 17:34:34.329368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.936 [2024-10-01 17:34:34.338717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.936 [2024-10-01 17:34:34.339194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.936 [2024-10-01 17:34:34.339232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.936 [2024-10-01 17:34:34.339244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.936 [2024-10-01 17:34:34.339262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.936 [2024-10-01 17:34:34.339274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.936 [2024-10-01 17:34:34.339281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.936 [2024-10-01 17:34:34.339290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.936 [2024-10-01 17:34:34.339305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.936 [2024-10-01 17:34:34.348779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.936 [2024-10-01 17:34:34.349270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.936 [2024-10-01 17:34:34.349308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.936 [2024-10-01 17:34:34.349319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.936 [2024-10-01 17:34:34.349338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.936 [2024-10-01 17:34:34.349350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.936 [2024-10-01 17:34:34.349357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.936 [2024-10-01 17:34:34.349365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.936 [2024-10-01 17:34:34.349380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
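[annotation] The connect() failures above (errno 111, i.e. ECONNREFUSED) are expected noise at this point: host/discovery.sh@127 has just removed the 10.0.0.2:4420 listener, so the host-side reconnect loop keeps failing against the dropped path until the next discovery log page prunes it. A hedged reconstruction of that step, using only the RPC calls and jq filters that appear in this log (rpc_cmd and /tmp/host.sock as used throughout; a sketch, not the test's exact helper code):

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Poll the remaining paths of controller nvme0 from the host RPC socket;
    # the list is expected to settle on "4421" once the stale 4420 path is dropped.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs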
00:34:35.936 [2024-10-01 17:34:34.358839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.936 [2024-10-01 17:34:34.359193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.936 [2024-10-01 17:34:34.359208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.936 [2024-10-01 17:34:34.359215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.936 [2024-10-01 17:34:34.359227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.936 [2024-10-01 17:34:34.359237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.936 [2024-10-01 17:34:34.359243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.936 [2024-10-01 17:34:34.359250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.936 [2024-10-01 17:34:34.359261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.936 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:35.937 [2024-10-01 17:34:34.368896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.369801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.369825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.369835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.369850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.369883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.369892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.369900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.369913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.937 [2024-10-01 17:34:34.378951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.379264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.379278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.379285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.379297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.379307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.379314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.379321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.379332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.937 [2024-10-01 17:34:34.389013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.389233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.389247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.389254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.389266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.389277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.389291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.389299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.389310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
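[annotation] The get_subsystem_names and get_bdev_list calls traced throughout this test (host/discovery.sh@55-63) reduce to one-line RPC + jq pipelines. A minimal sketch of what the xtrace shows them doing (function names, RPC methods, jq filters and the sort/xargs normalization are all taken from the traced commands; the real helpers may differ in detail):

    get_subsystem_names() { # controller names seen by the host
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {       # bdevs created for the attached namespaces
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }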
00:34:35.937 [2024-10-01 17:34:34.399068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.399440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.399453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.399460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.399471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.399481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.399488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.399495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.399505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.937 [2024-10-01 17:34:34.409125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.409440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.409452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.409459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.409470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.409480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.409486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.409493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.409504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
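[annotation] Each waitforcondition trace above (autotest_common.sh@914-920) is a bounded poll: store the condition string, try it up to ten times via eval, return success as soon as it holds, and sleep one second between attempts. A hedged reconstruction from the xtrace (the real helper may differ in details such as error reporting):

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1        # condition never became true within ~10 attempts
    }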
00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:35.937 [2024-10-01 17:34:34.419179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.937 [2024-10-01 17:34:34.419513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.419525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.419532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.419543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.419553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.419559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.419566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.419577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
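[annotation] The is_notification_count_eq checks in this test work off the notify RPC: host/discovery.sh@74 asks for every notification after the last seen id and counts them with jq, and @75 advances the cursor (in this log the new notify_id always equals the old id plus the count: 0->1->2, and later 2->4 when two events arrive at once). A sketch consistent with those traced values (illustrative bookkeeping only, not necessarily the script's exact logic):

    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # skip the already-counted events next time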
00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.937 [2024-10-01 17:34:34.429233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:35.937 [2024-10-01 17:34:34.429536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.937 [2024-10-01 17:34:34.429548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb71ad0 with addr=10.0.0.2, port=4420 00:34:35.937 [2024-10-01 17:34:34.429555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb71ad0 is same with the state(6) to be set 00:34:35.937 [2024-10-01 17:34:34.429566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb71ad0 (9): Bad file descriptor 00:34:35.937 [2024-10-01 17:34:34.429576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:35.937 [2024-10-01 17:34:34.429582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:35.937 [2024-10-01 17:34:34.429589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:35.937 [2024-10-01 17:34:34.429599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.937 [2024-10-01 17:34:34.432911] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:35.937 [2024-10-01 17:34:34.432930] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:34:35.937 17:34:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.320 17:34:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.258 [2024-10-01 17:34:36.798953] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:38.258 [2024-10-01 17:34:36.798972] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:38.258 [2024-10-01 17:34:36.798984] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:38.519 [2024-10-01 17:34:36.926383] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:38.779 [2024-10-01 17:34:37.196283] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:38.779 [2024-10-01 17:34:37.196314] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:38.779 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.779 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.779 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.780 17:34:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.780 request: 00:34:38.780 { 00:34:38.780 "name": "nvme", 00:34:38.780 "trtype": "tcp", 00:34:38.780 "traddr": "10.0.0.2", 00:34:38.780 "adrfam": "ipv4", 00:34:38.780 "trsvcid": "8009", 00:34:38.780 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:38.780 "wait_for_attach": true, 00:34:38.780 "method": "bdev_nvme_start_discovery", 00:34:38.780 "req_id": 1 00:34:38.780 } 00:34:38.780 Got JSON-RPC error response 00:34:38.780 response: 00:34:38.780 { 00:34:38.780 "code": -17, 00:34:38.780 "message": "File exists" 00:34:38.780 } 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:38.780 17:34:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.780 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.039 request: 00:34:39.039 { 00:34:39.039 "name": "nvme_second", 00:34:39.039 "trtype": "tcp", 00:34:39.039 "traddr": "10.0.0.2", 00:34:39.039 "adrfam": "ipv4", 00:34:39.039 "trsvcid": "8009", 00:34:39.039 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:39.039 "wait_for_attach": true, 00:34:39.039 "method": "bdev_nvme_start_discovery", 00:34:39.039 "req_id": 1 00:34:39.039 } 00:34:39.039 Got JSON-RPC error response 00:34:39.039 response: 00:34:39.039 { 00:34:39.039 "code": -17, 00:34:39.039 "message": "File exists" 00:34:39.039 } 00:34:39.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:39.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:39.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:39.039 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.040 17:34:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.040 17:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.979 [2024-10-01 17:34:38.451743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.979 [2024-10-01 17:34:38.451772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7ab40 with addr=10.0.0.2, port=8010 00:34:39.979 [2024-10-01 17:34:38.451786] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:39.979 [2024-10-01 17:34:38.451793] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:39.979 [2024-10-01 17:34:38.451801] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:40.916 [2024-10-01 17:34:39.454085] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.916 [2024-10-01 17:34:39.454108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7ab40 with addr=10.0.0.2, port=8010 00:34:40.916 [2024-10-01 17:34:39.454118] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:40.916 [2024-10-01 17:34:39.454125] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:40.916 [2024-10-01 17:34:39.454132] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:42.297 [2024-10-01 17:34:40.456124] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:42.297 request: 00:34:42.297 { 00:34:42.297 "name": "nvme_second", 00:34:42.297 "trtype": "tcp", 00:34:42.297 "traddr": "10.0.0.2", 00:34:42.297 "adrfam": "ipv4", 00:34:42.297 "trsvcid": "8010", 00:34:42.297 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:42.297 "wait_for_attach": false, 00:34:42.297 "attach_timeout_ms": 3000, 00:34:42.297 "method": "bdev_nvme_start_discovery", 00:34:42.297 "req_id": 1 00:34:42.297 } 00:34:42.297 Got JSON-RPC error response 00:34:42.297 response: 00:34:42.297 { 00:34:42.297 "code": -110, 00:34:42.297 "message": "Connection timed out" 00:34:42.297 } 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3231712 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.297 
17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.297 rmmod nvme_tcp 00:34:42.297 rmmod nvme_fabrics 00:34:42.297 rmmod nvme_keyring 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3231461 ']' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3231461 ']' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3231461' 00:34:42.297 killing process with pid 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3231461 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.297 17:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.837 17:34:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.837 00:34:44.837 real 0m20.532s 00:34:44.837 user 0m24.350s 00:34:44.837 sys 0m6.963s 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.837 ************************************ 00:34:44.837 END TEST nvmf_host_discovery 00:34:44.837 ************************************ 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.837 ************************************ 00:34:44.837 START TEST nvmf_host_multipath_status 00:34:44.837 ************************************ 00:34:44.837 17:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:44.837 * Looking for test storage... 00:34:44.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.837 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:44.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.837 --rc genhtml_branch_coverage=1 00:34:44.837 --rc genhtml_function_coverage=1 00:34:44.838 --rc genhtml_legend=1 00:34:44.838 --rc geninfo_all_blocks=1 00:34:44.838 --rc geninfo_unexecuted_blocks=1 00:34:44.838 00:34:44.838 ' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:44.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.838 --rc genhtml_branch_coverage=1 00:34:44.838 --rc genhtml_function_coverage=1 00:34:44.838 --rc genhtml_legend=1 00:34:44.838 --rc geninfo_all_blocks=1 00:34:44.838 --rc geninfo_unexecuted_blocks=1 00:34:44.838 00:34:44.838 ' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:44.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.838 --rc genhtml_branch_coverage=1 00:34:44.838 --rc genhtml_function_coverage=1 00:34:44.838 --rc genhtml_legend=1 00:34:44.838 --rc geninfo_all_blocks=1 00:34:44.838 --rc geninfo_unexecuted_blocks=1 00:34:44.838 00:34:44.838 ' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:44.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.838 --rc genhtml_branch_coverage=1 00:34:44.838 --rc genhtml_function_coverage=1 00:34:44.838 --rc genhtml_legend=1 00:34:44.838 --rc geninfo_all_blocks=1 00:34:44.838 --rc geninfo_unexecuted_blocks=1 00:34:44.838 00:34:44.838 ' 00:34:44.838 17:34:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:34:44.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.838 17:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.007 17:34:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.007 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.008 
17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:53.008 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:53.008 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:53.008 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:53.008 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:34:53.008 00:34:53.008 --- 10.0.0.2 ping statistics --- 00:34:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.008 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:34:53.008 00:34:53.008 --- 10.0.0.1 ping statistics --- 00:34:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.008 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:34:53.008 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3237804 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3237804 
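The nvmf_tcp_init trace above pins the target-side port (cvl_0_0) inside a network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and verifies reachability in both directions with ping. A condensed sketch of that sequence, using the same interface names and 10.0.0.0/24 addresses shown in the trace (the iptables comment wrapper added by the ipts helper is abbreviated here):
# Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are the
# E810 net devices enumerated earlier in this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns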
00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3237804 ']' 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.009 17:34:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.009 [2024-10-01 17:34:50.621144] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:34:53.009 [2024-10-01 17:34:50.621232] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.009 [2024-10-01 17:34:50.698731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:53.009 [2024-10-01 17:34:50.739603] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.009 [2024-10-01 17:34:50.739653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.009 [2024-10-01 17:34:50.739661] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.009 [2024-10-01 17:34:50.739668] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.009 [2024-10-01 17:34:50.739675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
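With nvmf_tgt running inside the namespace, the multipath_status test that follows provisions one subsystem with two TCP listeners (4420 and 4421) and then attaches bdevperf to both paths. For reference, a sketch of the equivalent rpc.py sequence, with every command and argument taken from the trace below (the full rpc.py path is shortened to rpc.py for readability):
# Target side (default /var/tmp/spdk.sock):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# bdevperf side (its RPC socket is /var/tmp/bdevperf.sock in the trace);
# the second attach adds -x multipath so both listeners serve the same Nvme0 controller:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10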
00:34:53.009 [2024-10-01 17:34:50.739832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.009 [2024-10-01 17:34:50.739833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3237804 00:34:53.009 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:53.269 [2024-10-01 17:34:51.606610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.269 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:53.269 Malloc0 00:34:53.269 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:53.529 17:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:53.790 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.790 [2024-10-01 17:34:52.278724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.790 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:54.051 [2024-10-01 17:34:52.435076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:54.051 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3238197 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3238197 
/var/tmp/bdevperf.sock 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3238197 ']' 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:54.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:54.052 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:54.313 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.313 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:54.313 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:54.313 17:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:54.974 Nvme0n1 00:34:54.974 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:54.974 Nvme0n1 00:34:55.253 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:55.253 17:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:57.164 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:57.164 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:57.423 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:57.423 17:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:58.805 17:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:58.805 17:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:58.805 17:34:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.805 17:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.805 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:59.065 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.065 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:59.065 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.065 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:59.326 17:34:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.326 17:34:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:59.586 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.586 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:59.586 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:59.845 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:00.104 17:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.043 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:01.304 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.304 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:01.304 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.304 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:01.565 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.565 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:01.565 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.565 17:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.825 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:02.086 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.086 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:02.086 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:02.347 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:02.347 17:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:03.750 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:03.750 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:03.750 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.750 17:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.750 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:04.010 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.010 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:04.010 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:04.010 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.270 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:04.530 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.530 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:04.530 17:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:35:04.789 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:04.789 17:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.172 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.433 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.433 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.433 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.433 17:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.696 
17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.696 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.956 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.956 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:06.956 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:07.216 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:07.475 17:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:08.413 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:08.413 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:08.414 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.414 17:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.673 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.932 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:09.192 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.192 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:09.192 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.192 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:09.450 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.450 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:09.451 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.451 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.451 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.451 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:09.451 17:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:09.710 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:09.970 17:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:10.909 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:10.909 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:10.909 17:35:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.909 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.168 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.427 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.427 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.427 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.428 17:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.687 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.687 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:11.687 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.687 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:11.949 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.949 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:11.949 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.949 
17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:11.949 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.949 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:12.208 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:12.208 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:12.468 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:12.468 17:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:13.412 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:13.412 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.412 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.412 17:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.672 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.672 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:13.672 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.672 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.932 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.932 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.932 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.932 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.191 17:35:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.191 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.450 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.450 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:14.450 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.450 17:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.710 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.710 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:14.710 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:14.970 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:14.970 17:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.353 17:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.613 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.613 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.613 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.613 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.873 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.133 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.133 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:17.133 
17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:17.393 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:17.393 17:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:18.775 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:18.775 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:18.775 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.775 17:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.775 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:19.035 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.035 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:19.036 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.036 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.296 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:19.557 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.557 17:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:19.557 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.557 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:19.557 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:19.817 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:20.077 17:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:21.018 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:21.018 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:21.018 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.018 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:21.279 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.279 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:21.279 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.279 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:21.540 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:35:21.540 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:21.540 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.540 17:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:21.540 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.540 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:21.540 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.540 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:21.801 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.801 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:21.801 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:21.801 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3238197 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3238197 ']' 00:35:22.061 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3238197 00:35:22.062 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:22.062 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.062 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3238197 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3238197' 00:35:22.325 killing process with pid 3238197 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3238197 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3238197 00:35:22.325 { 00:35:22.325 "results": [ 00:35:22.325 { 00:35:22.325 "job": "Nvme0n1", 00:35:22.325 "core_mask": "0x4", 00:35:22.325 "workload": "verify", 00:35:22.325 "status": "terminated", 00:35:22.325 "verify_range": { 00:35:22.325 "start": 0, 00:35:22.325 "length": 16384 00:35:22.325 }, 00:35:22.325 "queue_depth": 128, 00:35:22.325 "io_size": 4096, 00:35:22.325 "runtime": 26.98492, 00:35:22.325 "iops": 10868.737057586237, 00:35:22.325 "mibps": 42.45600413119624, 00:35:22.325 "io_failed": 0, 00:35:22.325 "io_timeout": 0, 00:35:22.325 "avg_latency_us": 11758.606622569545, 00:35:22.325 "min_latency_us": 285.0133333333333, 00:35:22.325 "max_latency_us": 3019898.88 00:35:22.325 } 00:35:22.325 ], 00:35:22.325 "core_count": 1 00:35:22.325 } 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3238197 00:35:22.325 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:22.325 [2024-10-01 17:34:52.502740] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:35:22.325 [2024-10-01 17:34:52.502834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238197 ] 00:35:22.325 [2024-10-01 17:34:52.559493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.325 [2024-10-01 17:34:52.587397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.325 [2024-10-01 17:34:53.445899] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:35:22.325 Running I/O for 90 seconds... 
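The set_ANA_state / port_status exchanges recorded above all follow one pattern: flip the ANA state of each listener with nvmf_subsystem_listener_set_ana_state, then query bdevperf's view of the paths through bdev_nvme_get_io_paths and filter it with jq. A rough reconstruction of that pattern, built only from the commands visible in this log (the function and variable names are illustrative, not the script's exact source), looks like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

set_ANA_state() {    # e.g. set_ANA_state non_optimized inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {      # e.g. port_status 4420 current true
    local port=$1 attr=$2 expected=$3
    local value
    value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$value" == "$expected" ]]
}

The terminating bdevperf summary printed here is internally consistent: 10868.7 IOPS at a 4096-byte I/O size is 10868.7 x 4096 / 2^20 ~ 42.46 MiB/s, matching the reported mibps, and the 26.98 s runtime (versus the requested 90 s) reflects the test killing bdevperf once the ANA-state checks are done, hence the "terminated" status.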
00:35:22.325 9587.00 IOPS, 37.45 MiB/s 9617.50 IOPS, 37.57 MiB/s 9668.00 IOPS, 37.77 MiB/s 9682.50 IOPS, 37.82 MiB/s 9923.60 IOPS, 38.76 MiB/s 10454.83 IOPS, 40.84 MiB/s 10795.14 IOPS, 42.17 MiB/s 10771.50 IOPS, 42.08 MiB/s 10641.33 IOPS, 41.57 MiB/s 10545.80 IOPS, 41.19 MiB/s 10467.18 IOPS, 40.89 MiB/s [2024-10-01 17:35:05.573466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:22.325 [2024-10-01 17:35:05.573631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.325 [2024-10-01 17:35:05.573637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:22.326 
[2024-10-01 17:35:05.573667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.326 [2024-10-01 17:35:05.573751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.573871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.573876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.574251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.574256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 
17:35:05.575185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.326 [2024-10-01 17:35:05.575236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:22.326 [2024-10-01 17:35:05.575248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75472 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.327 [2024-10-01 17:35:05.575855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.327 [2024-10-01 17:35:05.575873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:22.327 [2024-10-01 17:35:05.575885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.327 [2024-10-01 17:35:05.575891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.575903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.575908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:35:22.328 [2024-10-01 17:35:05.575921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.575926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.575938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.575944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.575956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.575961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.575974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.575979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.575992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.328 [2024-10-01 17:35:05.576151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 
[2024-10-01 17:35:05.576559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.328 [2024-10-01 17:35:05.576679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:22.328 [2024-10-01 17:35:05.576694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75896 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:05.576800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:05.576953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:05.576958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:22.329 10337.50 IOPS, 40.38 MiB/s 9542.31 IOPS, 37.27 MiB/s 8860.71 IOPS, 34.61 MiB/s 8335.27 IOPS, 32.56 MiB/s 8622.31 IOPS, 33.68 MiB/s 8873.88 IOPS, 34.66 MiB/s 9289.17 IOPS, 36.29 MiB/s 9695.05 IOPS, 37.87 MiB/s 9995.40 IOPS, 39.04 MiB/s 10145.19 IOPS, 39.63 MiB/s 10270.82 IOPS, 40.12 MiB/s 10512.52 IOPS, 41.06 MiB/s 10782.25 IOPS, 42.12 MiB/s [2024-10-01 17:35:18.457652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.457690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.457729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.457796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.457812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:22.329 
[2024-10-01 17:35:18.457837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.457858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.457899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.457904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.458027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.458045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.329 [2024-10-01 17:35:18.458093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.458109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.458124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.458140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:22.329 [2024-10-01 17:35:18.458151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.329 [2024-10-01 17:35:18.458156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:22.329 10965.88 IOPS, 42.84 MiB/s 10915.77 IOPS, 42.64 MiB/s Received shutdown signal, test time was about 26.985529 seconds 00:35:22.329 00:35:22.329 Latency(us) 00:35:22.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.329 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:22.329 Verification LBA range: start 0x0 length 0x4000 00:35:22.329 Nvme0n1 : 26.98 10868.74 42.46 0.00 0.00 11758.61 285.01 3019898.88 00:35:22.329 =================================================================================================================== 00:35:22.329 Total : 10868.74 42.46 0.00 0.00 11758.61 285.01 3019898.88 00:35:22.329 [2024-10-01 17:35:20.670246] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:35:22.329 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:22.591 17:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:22.591 rmmod nvme_tcp 00:35:22.591 rmmod nvme_fabrics 00:35:22.591 rmmod nvme_keyring 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3237804 ']' 00:35:22.591 17:35:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3237804 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3237804 ']' 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3237804 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3237804 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3237804' 00:35:22.591 killing process with pid 3237804 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3237804 00:35:22.591 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3237804 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.851 17:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:25.394 00:35:25.394 real 0m40.388s 00:35:25.394 user 1m44.298s 00:35:25.394 sys 0m11.519s 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:25.394 ************************************ 00:35:25.394 END TEST nvmf_host_multipath_status 00:35:25.394 ************************************ 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.394 ************************************ 00:35:25.394 START TEST nvmf_discovery_remove_ifc 00:35:25.394 ************************************ 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:25.394 * Looking for test storage... 00:35:25.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:25.394 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.395 --rc genhtml_branch_coverage=1 00:35:25.395 --rc genhtml_function_coverage=1 00:35:25.395 --rc genhtml_legend=1 00:35:25.395 --rc geninfo_all_blocks=1 00:35:25.395 --rc geninfo_unexecuted_blocks=1 00:35:25.395 00:35:25.395 ' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.395 --rc genhtml_branch_coverage=1 00:35:25.395 --rc genhtml_function_coverage=1 00:35:25.395 --rc genhtml_legend=1 00:35:25.395 --rc geninfo_all_blocks=1 00:35:25.395 --rc geninfo_unexecuted_blocks=1 00:35:25.395 00:35:25.395 ' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.395 --rc genhtml_branch_coverage=1 00:35:25.395 --rc genhtml_function_coverage=1 00:35:25.395 --rc genhtml_legend=1 00:35:25.395 --rc geninfo_all_blocks=1 00:35:25.395 --rc geninfo_unexecuted_blocks=1 00:35:25.395 00:35:25.395 ' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.395 --rc genhtml_branch_coverage=1 00:35:25.395 --rc genhtml_function_coverage=1 00:35:25.395 --rc genhtml_legend=1 00:35:25.395 --rc geninfo_all_blocks=1 00:35:25.395 --rc geninfo_unexecuted_blocks=1 00:35:25.395 00:35:25.395 ' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.395 
17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:25.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:25.395 17:35:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:33.536 17:35:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:33.536 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.536 17:35:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:33.536 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:33.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:33.536 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:33.537 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:33.537 
17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:35:33.537 00:35:33.537 --- 10.0.0.2 ping statistics --- 00:35:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.537 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:35:33.537 00:35:33.537 --- 10.0.0.1 ping statistics --- 00:35:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.537 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:33.537 17:35:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3247906 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 3247906 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3247906 ']' 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:33.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.537 [2024-10-01 17:35:31.064922] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:35:33.537 [2024-10-01 17:35:31.064991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.537 [2024-10-01 17:35:31.152758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.537 [2024-10-01 17:35:31.199495] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.537 [2024-10-01 17:35:31.199549] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.537 [2024-10-01 17:35:31.199557] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.537 [2024-10-01 17:35:31.199569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.537 [2024-10-01 17:35:31.199576] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:33.537 [2024-10-01 17:35:31.199600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.537 [2024-10-01 17:35:31.933676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.537 [2024-10-01 17:35:31.941915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:33.537 null0 00:35:33.537 [2024-10-01 17:35:31.973891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3247990 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3247990 /tmp/host.sock 00:35:33.537 17:35:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3247990 ']' 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:33.537 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:33.537 17:35:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.537 [2024-10-01 17:35:32.050532] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:35:33.537 [2024-10-01 17:35:32.050596] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247990 ] 00:35:33.799 [2024-10-01 17:35:32.115085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.799 [2024-10-01 17:35:32.154414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:33.799 17:35:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.799 17:35:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.183 [2024-10-01 17:35:33.311140] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:35.183 [2024-10-01 17:35:33.311163] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:35.183 [2024-10-01 17:35:33.311177] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:35.183 [2024-10-01 17:35:33.439623] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:35.183 [2024-10-01 17:35:33.665253] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:35.183 [2024-10-01 17:35:33.665303] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:35.183 [2024-10-01 17:35:33.665324] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:35.183 [2024-10-01 17:35:33.665338] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:35.183 [2024-10-01 17:35:33.665357] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.183 [2024-10-01 17:35:33.670346] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24a54d0 was disconnected and freed. delete nvme_qpair. 
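The wait_for_bdev/get_bdev_list helpers whose expansion dominates the trace from here on are just a poll over the host application's RPC socket. Reconstructed from the traced commands (roughly, not the verbatim host/discovery_remove_ifc.sh source; rpc_cmd is the suite's RPC client wrapper):

    get_bdev_list() {
        # Ask the host app listening on /tmp/host.sock for its current bdevs and
        # flatten the names into one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value:
        # "nvme0n1" after discovery attaches, "" once the interface is removed.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

Each repetition of the rpc_cmd / jq / sort / xargs / sleep block below is one iteration of that loop.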
00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:35.183 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:35.444 17:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:36.385 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:36.385 17:35:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.645 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:36.645 17:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.587 17:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.587 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:37.587 17:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:38.527 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.527 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:38.528 17:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.912 17:35:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:39.912 17:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:40.852 [2024-10-01 17:35:39.116235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:40.852 [2024-10-01 17:35:39.116276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.852 [2024-10-01 17:35:39.116287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.852 [2024-10-01 17:35:39.116298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.852 [2024-10-01 17:35:39.116306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.852 [2024-10-01 17:35:39.116314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.852 [2024-10-01 17:35:39.116321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.852 [2024-10-01 17:35:39.116330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.852 [2024-10-01 17:35:39.116337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.852 [2024-10-01 17:35:39.116345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.852 [2024-10-01 17:35:39.116353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.852 [2024-10-01 17:35:39.116361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481d80 is same with the state(6) to be set 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.852 [2024-10-01 17:35:39.126258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2481d80 (9): Bad 
file descriptor 00:35:40.852 [2024-10-01 17:35:39.136297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:40.852 17:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:41.791 [2024-10-01 17:35:40.160040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:41.791 [2024-10-01 17:35:40.160093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2481d80 with addr=10.0.0.2, port=4420 00:35:41.791 [2024-10-01 17:35:40.160106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481d80 is same with the state(6) to be set 00:35:41.791 [2024-10-01 17:35:40.160137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2481d80 (9): Bad file descriptor 00:35:41.791 [2024-10-01 17:35:40.160190] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:41.791 [2024-10-01 17:35:40.160212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:41.791 [2024-10-01 17:35:40.160220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:41.791 [2024-10-01 17:35:40.160229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:41.791 [2024-10-01 17:35:40.160249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.791 [2024-10-01 17:35:40.160258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:41.791 17:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:42.756 [2024-10-01 17:35:41.162638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:42.756 [2024-10-01 17:35:41.162661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:42.756 [2024-10-01 17:35:41.162670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:42.756 [2024-10-01 17:35:41.162678] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:42.756 [2024-10-01 17:35:41.162692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.756 [2024-10-01 17:35:41.162713] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:42.756 [2024-10-01 17:35:41.162742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:42.756 [2024-10-01 17:35:41.162753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.756 [2024-10-01 17:35:41.162763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:42.756 [2024-10-01 17:35:41.162771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.756 [2024-10-01 17:35:41.162779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:42.756 [2024-10-01 17:35:41.162787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.756 [2024-10-01 17:35:41.162795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:42.756 [2024-10-01 17:35:41.162802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.756 [2024-10-01 17:35:41.162811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:42.756 [2024-10-01 17:35:41.162819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.756 [2024-10-01 17:35:41.162826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
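These error bursts are the intended effect of the interface-removal step traced a few entries back (the @75/@76/@79 steps); condensed, with the same caveat that this is a sketch of the traced commands rather than the script source:

    # Pull the target's address and take its link down inside the namespace...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ...then wait for the host to notice. With --reconnect-delay-sec 1,
    # --fast-io-fail-timeout-sec 1 and --ctrlr-loss-timeout-sec 2 (from the
    # bdev_nvme_start_discovery call earlier), the reconnect attempts time out,
    # nvme0n1 is deleted and the bdev list drains to empty.
    wait_for_bdev ''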
00:35:42.756 [2024-10-01 17:35:41.162852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24714c0 (9): Bad file descriptor 00:35:42.756 [2024-10-01 17:35:41.163851] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:42.756 [2024-10-01 17:35:41.163861] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:42.756 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:42.756 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:42.756 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:42.756 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.756 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:42.757 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.757 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:42.757 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.757 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:42.757 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:43.033 17:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.011 17:35:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:44.011 17:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:44.948 [2024-10-01 17:35:43.216147] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:44.948 [2024-10-01 17:35:43.216164] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:44.948 [2024-10-01 17:35:43.216177] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:44.948 [2024-10-01 17:35:43.303466] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:44.948 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:44.948 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.948 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:44.948 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.948 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:44.949 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.949 [2024-10-01 17:35:43.487626] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:44.949 [2024-10-01 17:35:43.487667] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:44.949 [2024-10-01 17:35:43.487686] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:44.949 [2024-10-01 17:35:43.487700] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:44.949 [2024-10-01 17:35:43.487708] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:44.949 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:44.949 [2024-10-01 17:35:43.494471] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x247d760 was disconnected and freed. delete nvme_qpair. 
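The get_bdev_list/wait_for_bdev polling exercised in the trace above reduces to a small helper pair. The following shell sketch is reconstructed from the xtrace output, not the verbatim script source; rpc_cmd is the harness's JSON-RPC wrapper, and the real helper also applies a timeout that is omitted here.

# Sketch of the bdev polling pattern in discovery_remove_ifc.sh, reconstructed
# from the xtrace lines above (assumed simplification, not the exact script).
get_bdev_list() {
    # Ask the host-side SPDK app (listening on /tmp/host.sock) for its bdevs
    # and flatten the names into one sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1   # "nvme1n1" while waiting for re-attach, "" for removal
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1         # same one-second retry cadence seen in the trace
    done
}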
00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3247990 ']' 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247990' 00:35:45.209 killing process with pid 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3247990 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.209 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.209 rmmod nvme_tcp 00:35:45.209 rmmod nvme_fabrics 00:35:45.470 rmmod nvme_keyring 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3247906 ']' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3247906 ']' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # uname 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247906' 00:35:45.470 killing process with pid 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3247906 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.470 17:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.009 00:35:48.009 real 0m22.661s 00:35:48.009 user 0m26.050s 00:35:48.009 sys 0m7.009s 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.009 ************************************ 00:35:48.009 END TEST nvmf_discovery_remove_ifc 00:35:48.009 ************************************ 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.009 ************************************ 00:35:48.009 
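The teardown traced above (killprocess for the host app and then for the target, followed by nvmftestfini unloading nvme-tcp, nvme-fabrics and nvme-keyring and tearing down the namespace) hinges on the killprocess helper. A simplified sketch reconstructed from the xtrace lines follows; the real autotest_common.sh handles the sudo-wrapped case separately.

# Simplified sketch of the killprocess helper seen in the trace; not the
# verbatim autotest_common.sh implementation.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # trace: '[' -z <pid> ']'
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1         # don't signal a bare sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap so ports and sockets are freed
}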
START TEST nvmf_identify_kernel_target 00:35:48.009 ************************************ 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:48.009 * Looking for test storage... 00:35:48.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.009 --rc genhtml_branch_coverage=1 00:35:48.009 --rc genhtml_function_coverage=1 00:35:48.009 --rc genhtml_legend=1 00:35:48.009 --rc geninfo_all_blocks=1 00:35:48.009 --rc geninfo_unexecuted_blocks=1 00:35:48.009 00:35:48.009 ' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.009 --rc genhtml_branch_coverage=1 00:35:48.009 --rc genhtml_function_coverage=1 00:35:48.009 --rc genhtml_legend=1 00:35:48.009 --rc geninfo_all_blocks=1 00:35:48.009 --rc geninfo_unexecuted_blocks=1 00:35:48.009 00:35:48.009 ' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.009 --rc genhtml_branch_coverage=1 00:35:48.009 --rc genhtml_function_coverage=1 00:35:48.009 --rc genhtml_legend=1 00:35:48.009 --rc geninfo_all_blocks=1 00:35:48.009 --rc geninfo_unexecuted_blocks=1 00:35:48.009 00:35:48.009 ' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:48.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.009 --rc genhtml_branch_coverage=1 00:35:48.009 --rc genhtml_function_coverage=1 00:35:48.009 --rc genhtml_legend=1 00:35:48.009 --rc geninfo_all_blocks=1 00:35:48.009 --rc geninfo_unexecuted_blocks=1 00:35:48.009 00:35:48.009 ' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:48.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:48.009 17:35:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.145 17:35:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:56.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:56.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:56.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:56.145 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.145 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:56.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:56.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:35:56.146 00:35:56.146 --- 10.0.0.2 ping statistics --- 00:35:56.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.146 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:56.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:56.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:35:56.146 00:35:56.146 --- 10.0.0.1 ping statistics --- 00:35:56.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.146 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:56.146 17:35:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:56.146 17:35:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.691 Waiting for block devices as requested 00:35:58.691 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:58.957 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:58.957 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:58.957 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:59.219 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:59.219 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:59.219 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:59.479 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:59.479 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:59.741 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:59.741 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:59.741 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:00.001 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:00.001 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:00.001 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:00.002 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:00.262 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
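The configure_kernel_target call that follows builds the kernel nvmet target by writing configfs directly. The trace shows the mkdir/echo/ln -s commands but truncates the files they write to, so the sketch below fills in the attribute names from the standard nvmet configfs layout; those names are an assumption, not verbatim from the log (the Model Number "SPDK-nqn.2016-06.io.spdk:testnqn" reported by the identify pass further down is consistent with writing attr_model).

# Sketch of the kernel NVMe-oF target setup performed below; attribute file
# names are assumed from the standard nvmet configfs layout.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                    # loaded in the trace above
mkdir -p "$sub/namespaces/1" "$port"

echo "SPDK-$nqn"   > "$sub/attr_model"            # shows up as Model Number
echo 1             > "$sub/attr_allow_any_host"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
echo 1             > "$sub/namespaces/1/enable"

echo 10.0.0.1      > "$port/addr_traddr"          # target_ip from the trace
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"

ln -s "$sub" "$port/subsystems/"                  # expose the subsystem on the port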
00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:00.523 No valid GPT data, bailing 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:00.523 17:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:00.523 00:36:00.523 Discovery Log Number of Records 2, Generation counter 2 00:36:00.523 =====Discovery Log Entry 0====== 00:36:00.523 trtype: tcp 00:36:00.523 adrfam: ipv4 00:36:00.523 subtype: current discovery subsystem 00:36:00.523 treq: not specified, sq flow control disable supported 00:36:00.523 portid: 1 00:36:00.523 trsvcid: 4420 00:36:00.523 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:00.523 traddr: 10.0.0.1 00:36:00.523 eflags: none 00:36:00.523 sectype: none 00:36:00.523 =====Discovery Log Entry 1====== 00:36:00.523 trtype: tcp 00:36:00.523 adrfam: ipv4 00:36:00.523 subtype: nvme subsystem 00:36:00.523 treq: not specified, sq flow control disable 
supported 00:36:00.523 portid: 1 00:36:00.523 trsvcid: 4420 00:36:00.523 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:00.523 traddr: 10.0.0.1 00:36:00.523 eflags: none 00:36:00.523 sectype: none 00:36:00.523 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:00.523 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:00.785 ===================================================== 00:36:00.785 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:00.785 ===================================================== 00:36:00.785 Controller Capabilities/Features 00:36:00.785 ================================ 00:36:00.785 Vendor ID: 0000 00:36:00.785 Subsystem Vendor ID: 0000 00:36:00.785 Serial Number: af8983bce43e20a6524d 00:36:00.785 Model Number: Linux 00:36:00.785 Firmware Version: 6.8.9-20 00:36:00.785 Recommended Arb Burst: 0 00:36:00.785 IEEE OUI Identifier: 00 00 00 00:36:00.785 Multi-path I/O 00:36:00.785 May have multiple subsystem ports: No 00:36:00.785 May have multiple controllers: No 00:36:00.785 Associated with SR-IOV VF: No 00:36:00.785 Max Data Transfer Size: Unlimited 00:36:00.785 Max Number of Namespaces: 0 00:36:00.785 Max Number of I/O Queues: 1024 00:36:00.785 NVMe Specification Version (VS): 1.3 00:36:00.785 NVMe Specification Version (Identify): 1.3 00:36:00.785 Maximum Queue Entries: 1024 00:36:00.785 Contiguous Queues Required: No 00:36:00.785 Arbitration Mechanisms Supported 00:36:00.785 Weighted Round Robin: Not Supported 00:36:00.785 Vendor Specific: Not Supported 00:36:00.785 Reset Timeout: 7500 ms 00:36:00.785 Doorbell Stride: 4 bytes 00:36:00.785 NVM Subsystem Reset: Not Supported 00:36:00.785 Command Sets Supported 00:36:00.785 NVM Command Set: Supported 00:36:00.785 Boot Partition: Not Supported 00:36:00.785 Memory Page Size Minimum: 4096 bytes 00:36:00.785 Memory Page Size Maximum: 4096 bytes 00:36:00.785 Persistent Memory Region: Not Supported 00:36:00.785 Optional Asynchronous Events Supported 00:36:00.785 Namespace Attribute Notices: Not Supported 00:36:00.785 Firmware Activation Notices: Not Supported 00:36:00.785 ANA Change Notices: Not Supported 00:36:00.785 PLE Aggregate Log Change Notices: Not Supported 00:36:00.785 LBA Status Info Alert Notices: Not Supported 00:36:00.785 EGE Aggregate Log Change Notices: Not Supported 00:36:00.785 Normal NVM Subsystem Shutdown event: Not Supported 00:36:00.785 Zone Descriptor Change Notices: Not Supported 00:36:00.785 Discovery Log Change Notices: Supported 00:36:00.785 Controller Attributes 00:36:00.785 128-bit Host Identifier: Not Supported 00:36:00.785 Non-Operational Permissive Mode: Not Supported 00:36:00.785 NVM Sets: Not Supported 00:36:00.785 Read Recovery Levels: Not Supported 00:36:00.785 Endurance Groups: Not Supported 00:36:00.785 Predictable Latency Mode: Not Supported 00:36:00.785 Traffic Based Keep ALive: Not Supported 00:36:00.785 Namespace Granularity: Not Supported 00:36:00.785 SQ Associations: Not Supported 00:36:00.785 UUID List: Not Supported 00:36:00.785 Multi-Domain Subsystem: Not Supported 00:36:00.785 Fixed Capacity Management: Not Supported 00:36:00.785 Variable Capacity Management: Not Supported 00:36:00.785 Delete Endurance Group: Not Supported 00:36:00.785 Delete NVM Set: Not Supported 00:36:00.785 Extended LBA Formats Supported: Not Supported 00:36:00.785 Flexible Data Placement 
Supported: Not Supported 00:36:00.785 00:36:00.785 Controller Memory Buffer Support 00:36:00.785 ================================ 00:36:00.785 Supported: No 00:36:00.785 00:36:00.785 Persistent Memory Region Support 00:36:00.785 ================================ 00:36:00.785 Supported: No 00:36:00.785 00:36:00.785 Admin Command Set Attributes 00:36:00.785 ============================ 00:36:00.785 Security Send/Receive: Not Supported 00:36:00.785 Format NVM: Not Supported 00:36:00.785 Firmware Activate/Download: Not Supported 00:36:00.785 Namespace Management: Not Supported 00:36:00.785 Device Self-Test: Not Supported 00:36:00.785 Directives: Not Supported 00:36:00.785 NVMe-MI: Not Supported 00:36:00.785 Virtualization Management: Not Supported 00:36:00.785 Doorbell Buffer Config: Not Supported 00:36:00.785 Get LBA Status Capability: Not Supported 00:36:00.785 Command & Feature Lockdown Capability: Not Supported 00:36:00.785 Abort Command Limit: 1 00:36:00.785 Async Event Request Limit: 1 00:36:00.785 Number of Firmware Slots: N/A 00:36:00.785 Firmware Slot 1 Read-Only: N/A 00:36:00.785 Firmware Activation Without Reset: N/A 00:36:00.785 Multiple Update Detection Support: N/A 00:36:00.785 Firmware Update Granularity: No Information Provided 00:36:00.785 Per-Namespace SMART Log: No 00:36:00.785 Asymmetric Namespace Access Log Page: Not Supported 00:36:00.785 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:00.785 Command Effects Log Page: Not Supported 00:36:00.785 Get Log Page Extended Data: Supported 00:36:00.785 Telemetry Log Pages: Not Supported 00:36:00.785 Persistent Event Log Pages: Not Supported 00:36:00.785 Supported Log Pages Log Page: May Support 00:36:00.785 Commands Supported & Effects Log Page: Not Supported 00:36:00.785 Feature Identifiers & Effects Log Page:May Support 00:36:00.785 NVMe-MI Commands & Effects Log Page: May Support 00:36:00.785 Data Area 4 for Telemetry Log: Not Supported 00:36:00.785 Error Log Page Entries Supported: 1 00:36:00.785 Keep Alive: Not Supported 00:36:00.785 00:36:00.785 NVM Command Set Attributes 00:36:00.785 ========================== 00:36:00.785 Submission Queue Entry Size 00:36:00.785 Max: 1 00:36:00.785 Min: 1 00:36:00.785 Completion Queue Entry Size 00:36:00.785 Max: 1 00:36:00.785 Min: 1 00:36:00.785 Number of Namespaces: 0 00:36:00.785 Compare Command: Not Supported 00:36:00.785 Write Uncorrectable Command: Not Supported 00:36:00.785 Dataset Management Command: Not Supported 00:36:00.785 Write Zeroes Command: Not Supported 00:36:00.785 Set Features Save Field: Not Supported 00:36:00.786 Reservations: Not Supported 00:36:00.786 Timestamp: Not Supported 00:36:00.786 Copy: Not Supported 00:36:00.786 Volatile Write Cache: Not Present 00:36:00.786 Atomic Write Unit (Normal): 1 00:36:00.786 Atomic Write Unit (PFail): 1 00:36:00.786 Atomic Compare & Write Unit: 1 00:36:00.786 Fused Compare & Write: Not Supported 00:36:00.786 Scatter-Gather List 00:36:00.786 SGL Command Set: Supported 00:36:00.786 SGL Keyed: Not Supported 00:36:00.786 SGL Bit Bucket Descriptor: Not Supported 00:36:00.786 SGL Metadata Pointer: Not Supported 00:36:00.786 Oversized SGL: Not Supported 00:36:00.786 SGL Metadata Address: Not Supported 00:36:00.786 SGL Offset: Supported 00:36:00.786 Transport SGL Data Block: Not Supported 00:36:00.786 Replay Protected Memory Block: Not Supported 00:36:00.786 00:36:00.786 Firmware Slot Information 00:36:00.786 ========================= 00:36:00.786 Active slot: 0 00:36:00.786 00:36:00.786 00:36:00.786 Error Log 00:36:00.786 
========= 00:36:00.786 00:36:00.786 Active Namespaces 00:36:00.786 ================= 00:36:00.786 Discovery Log Page 00:36:00.786 ================== 00:36:00.786 Generation Counter: 2 00:36:00.786 Number of Records: 2 00:36:00.786 Record Format: 0 00:36:00.786 00:36:00.786 Discovery Log Entry 0 00:36:00.786 ---------------------- 00:36:00.786 Transport Type: 3 (TCP) 00:36:00.786 Address Family: 1 (IPv4) 00:36:00.786 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:00.786 Entry Flags: 00:36:00.786 Duplicate Returned Information: 0 00:36:00.786 Explicit Persistent Connection Support for Discovery: 0 00:36:00.786 Transport Requirements: 00:36:00.786 Secure Channel: Not Specified 00:36:00.786 Port ID: 1 (0x0001) 00:36:00.786 Controller ID: 65535 (0xffff) 00:36:00.786 Admin Max SQ Size: 32 00:36:00.786 Transport Service Identifier: 4420 00:36:00.786 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:00.786 Transport Address: 10.0.0.1 00:36:00.786 Discovery Log Entry 1 00:36:00.786 ---------------------- 00:36:00.786 Transport Type: 3 (TCP) 00:36:00.786 Address Family: 1 (IPv4) 00:36:00.786 Subsystem Type: 2 (NVM Subsystem) 00:36:00.786 Entry Flags: 00:36:00.786 Duplicate Returned Information: 0 00:36:00.786 Explicit Persistent Connection Support for Discovery: 0 00:36:00.786 Transport Requirements: 00:36:00.786 Secure Channel: Not Specified 00:36:00.786 Port ID: 1 (0x0001) 00:36:00.786 Controller ID: 65535 (0xffff) 00:36:00.786 Admin Max SQ Size: 32 00:36:00.786 Transport Service Identifier: 4420 00:36:00.786 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:00.786 Transport Address: 10.0.0.1 00:36:00.786 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.786 get_feature(0x01) failed 00:36:00.786 get_feature(0x02) failed 00:36:00.786 get_feature(0x04) failed 00:36:00.786 ===================================================== 00:36:00.786 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.786 ===================================================== 00:36:00.786 Controller Capabilities/Features 00:36:00.786 ================================ 00:36:00.786 Vendor ID: 0000 00:36:00.786 Subsystem Vendor ID: 0000 00:36:00.786 Serial Number: e3e8c2f19e34c67134e5 00:36:00.786 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:00.786 Firmware Version: 6.8.9-20 00:36:00.786 Recommended Arb Burst: 6 00:36:00.786 IEEE OUI Identifier: 00 00 00 00:36:00.786 Multi-path I/O 00:36:00.786 May have multiple subsystem ports: Yes 00:36:00.786 May have multiple controllers: Yes 00:36:00.786 Associated with SR-IOV VF: No 00:36:00.786 Max Data Transfer Size: Unlimited 00:36:00.786 Max Number of Namespaces: 1024 00:36:00.786 Max Number of I/O Queues: 128 00:36:00.786 NVMe Specification Version (VS): 1.3 00:36:00.786 NVMe Specification Version (Identify): 1.3 00:36:00.786 Maximum Queue Entries: 1024 00:36:00.786 Contiguous Queues Required: No 00:36:00.786 Arbitration Mechanisms Supported 00:36:00.786 Weighted Round Robin: Not Supported 00:36:00.786 Vendor Specific: Not Supported 00:36:00.786 Reset Timeout: 7500 ms 00:36:00.786 Doorbell Stride: 4 bytes 00:36:00.786 NVM Subsystem Reset: Not Supported 00:36:00.786 Command Sets Supported 00:36:00.786 NVM Command Set: Supported 00:36:00.786 Boot Partition: Not Supported 00:36:00.786 
Memory Page Size Minimum: 4096 bytes 00:36:00.786 Memory Page Size Maximum: 4096 bytes 00:36:00.786 Persistent Memory Region: Not Supported 00:36:00.786 Optional Asynchronous Events Supported 00:36:00.786 Namespace Attribute Notices: Supported 00:36:00.786 Firmware Activation Notices: Not Supported 00:36:00.786 ANA Change Notices: Supported 00:36:00.786 PLE Aggregate Log Change Notices: Not Supported 00:36:00.786 LBA Status Info Alert Notices: Not Supported 00:36:00.786 EGE Aggregate Log Change Notices: Not Supported 00:36:00.786 Normal NVM Subsystem Shutdown event: Not Supported 00:36:00.786 Zone Descriptor Change Notices: Not Supported 00:36:00.786 Discovery Log Change Notices: Not Supported 00:36:00.786 Controller Attributes 00:36:00.786 128-bit Host Identifier: Supported 00:36:00.786 Non-Operational Permissive Mode: Not Supported 00:36:00.786 NVM Sets: Not Supported 00:36:00.786 Read Recovery Levels: Not Supported 00:36:00.786 Endurance Groups: Not Supported 00:36:00.786 Predictable Latency Mode: Not Supported 00:36:00.786 Traffic Based Keep ALive: Supported 00:36:00.786 Namespace Granularity: Not Supported 00:36:00.786 SQ Associations: Not Supported 00:36:00.786 UUID List: Not Supported 00:36:00.786 Multi-Domain Subsystem: Not Supported 00:36:00.786 Fixed Capacity Management: Not Supported 00:36:00.786 Variable Capacity Management: Not Supported 00:36:00.786 Delete Endurance Group: Not Supported 00:36:00.786 Delete NVM Set: Not Supported 00:36:00.786 Extended LBA Formats Supported: Not Supported 00:36:00.786 Flexible Data Placement Supported: Not Supported 00:36:00.786 00:36:00.786 Controller Memory Buffer Support 00:36:00.786 ================================ 00:36:00.786 Supported: No 00:36:00.786 00:36:00.786 Persistent Memory Region Support 00:36:00.786 ================================ 00:36:00.786 Supported: No 00:36:00.786 00:36:00.786 Admin Command Set Attributes 00:36:00.786 ============================ 00:36:00.786 Security Send/Receive: Not Supported 00:36:00.786 Format NVM: Not Supported 00:36:00.786 Firmware Activate/Download: Not Supported 00:36:00.786 Namespace Management: Not Supported 00:36:00.786 Device Self-Test: Not Supported 00:36:00.786 Directives: Not Supported 00:36:00.786 NVMe-MI: Not Supported 00:36:00.786 Virtualization Management: Not Supported 00:36:00.786 Doorbell Buffer Config: Not Supported 00:36:00.786 Get LBA Status Capability: Not Supported 00:36:00.786 Command & Feature Lockdown Capability: Not Supported 00:36:00.786 Abort Command Limit: 4 00:36:00.786 Async Event Request Limit: 4 00:36:00.786 Number of Firmware Slots: N/A 00:36:00.786 Firmware Slot 1 Read-Only: N/A 00:36:00.786 Firmware Activation Without Reset: N/A 00:36:00.786 Multiple Update Detection Support: N/A 00:36:00.786 Firmware Update Granularity: No Information Provided 00:36:00.786 Per-Namespace SMART Log: Yes 00:36:00.786 Asymmetric Namespace Access Log Page: Supported 00:36:00.786 ANA Transition Time : 10 sec 00:36:00.786 00:36:00.786 Asymmetric Namespace Access Capabilities 00:36:00.786 ANA Optimized State : Supported 00:36:00.786 ANA Non-Optimized State : Supported 00:36:00.786 ANA Inaccessible State : Supported 00:36:00.786 ANA Persistent Loss State : Supported 00:36:00.786 ANA Change State : Supported 00:36:00.786 ANAGRPID is not changed : No 00:36:00.786 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:00.786 00:36:00.786 ANA Group Identifier Maximum : 128 00:36:00.786 Number of ANA Group Identifiers : 128 00:36:00.786 Max Number of Allowed Namespaces : 1024 00:36:00.786 
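The Discovery Log Page earlier in this run advertises the NVM subsystem nqn.2016-06.io.spdk:testnqn on the same 10.0.0.1:4420 TCP listener, and the ANA block just printed shows the kernel target grouping its namespace under ANA group 1 for native NVMe multipath. A hedged nvme-cli sketch of attaching to that listener from a host and checking the per-path ANA state (assumes nvme-cli is installed; device names and output layout vary between versions):

# query the discovery service, then attach to the advertised subsystem
nvme discover -t tcp -a 10.0.0.1 -s 4420
nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
# each path reports its ANA state (e.g. "live optimized") per subsystem
nvme list-subsys
# detach again when done
nvme disconnect -n nqn.2016-06.io.spdk:testnqn
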
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:00.786 Command Effects Log Page: Supported 00:36:00.786 Get Log Page Extended Data: Supported 00:36:00.786 Telemetry Log Pages: Not Supported 00:36:00.786 Persistent Event Log Pages: Not Supported 00:36:00.786 Supported Log Pages Log Page: May Support 00:36:00.786 Commands Supported & Effects Log Page: Not Supported 00:36:00.786 Feature Identifiers & Effects Log Page:May Support 00:36:00.786 NVMe-MI Commands & Effects Log Page: May Support 00:36:00.786 Data Area 4 for Telemetry Log: Not Supported 00:36:00.786 Error Log Page Entries Supported: 128 00:36:00.786 Keep Alive: Supported 00:36:00.786 Keep Alive Granularity: 1000 ms 00:36:00.786 00:36:00.786 NVM Command Set Attributes 00:36:00.786 ========================== 00:36:00.786 Submission Queue Entry Size 00:36:00.786 Max: 64 00:36:00.786 Min: 64 00:36:00.786 Completion Queue Entry Size 00:36:00.786 Max: 16 00:36:00.786 Min: 16 00:36:00.786 Number of Namespaces: 1024 00:36:00.787 Compare Command: Not Supported 00:36:00.787 Write Uncorrectable Command: Not Supported 00:36:00.787 Dataset Management Command: Supported 00:36:00.787 Write Zeroes Command: Supported 00:36:00.787 Set Features Save Field: Not Supported 00:36:00.787 Reservations: Not Supported 00:36:00.787 Timestamp: Not Supported 00:36:00.787 Copy: Not Supported 00:36:00.787 Volatile Write Cache: Present 00:36:00.787 Atomic Write Unit (Normal): 1 00:36:00.787 Atomic Write Unit (PFail): 1 00:36:00.787 Atomic Compare & Write Unit: 1 00:36:00.787 Fused Compare & Write: Not Supported 00:36:00.787 Scatter-Gather List 00:36:00.787 SGL Command Set: Supported 00:36:00.787 SGL Keyed: Not Supported 00:36:00.787 SGL Bit Bucket Descriptor: Not Supported 00:36:00.787 SGL Metadata Pointer: Not Supported 00:36:00.787 Oversized SGL: Not Supported 00:36:00.787 SGL Metadata Address: Not Supported 00:36:00.787 SGL Offset: Supported 00:36:00.787 Transport SGL Data Block: Not Supported 00:36:00.787 Replay Protected Memory Block: Not Supported 00:36:00.787 00:36:00.787 Firmware Slot Information 00:36:00.787 ========================= 00:36:00.787 Active slot: 0 00:36:00.787 00:36:00.787 Asymmetric Namespace Access 00:36:00.787 =========================== 00:36:00.787 Change Count : 0 00:36:00.787 Number of ANA Group Descriptors : 1 00:36:00.787 ANA Group Descriptor : 0 00:36:00.787 ANA Group ID : 1 00:36:00.787 Number of NSID Values : 1 00:36:00.787 Change Count : 0 00:36:00.787 ANA State : 1 00:36:00.787 Namespace Identifier : 1 00:36:00.787 00:36:00.787 Commands Supported and Effects 00:36:00.787 ============================== 00:36:00.787 Admin Commands 00:36:00.787 -------------- 00:36:00.787 Get Log Page (02h): Supported 00:36:00.787 Identify (06h): Supported 00:36:00.787 Abort (08h): Supported 00:36:00.787 Set Features (09h): Supported 00:36:00.787 Get Features (0Ah): Supported 00:36:00.787 Asynchronous Event Request (0Ch): Supported 00:36:00.787 Keep Alive (18h): Supported 00:36:00.787 I/O Commands 00:36:00.787 ------------ 00:36:00.787 Flush (00h): Supported 00:36:00.787 Write (01h): Supported LBA-Change 00:36:00.787 Read (02h): Supported 00:36:00.787 Write Zeroes (08h): Supported LBA-Change 00:36:00.787 Dataset Management (09h): Supported 00:36:00.787 00:36:00.787 Error Log 00:36:00.787 ========= 00:36:00.787 Entry: 0 00:36:00.787 Error Count: 0x3 00:36:00.787 Submission Queue Id: 0x0 00:36:00.787 Command Id: 0x5 00:36:00.787 Phase Bit: 0 00:36:00.787 Status Code: 0x2 00:36:00.787 Status Code Type: 0x0 00:36:00.787 Do Not Retry: 1 00:36:00.787 
Error Location: 0x28 00:36:00.787 LBA: 0x0 00:36:00.787 Namespace: 0x0 00:36:00.787 Vendor Log Page: 0x0 00:36:00.787 ----------- 00:36:00.787 Entry: 1 00:36:00.787 Error Count: 0x2 00:36:00.787 Submission Queue Id: 0x0 00:36:00.787 Command Id: 0x5 00:36:00.787 Phase Bit: 0 00:36:00.787 Status Code: 0x2 00:36:00.787 Status Code Type: 0x0 00:36:00.787 Do Not Retry: 1 00:36:00.787 Error Location: 0x28 00:36:00.787 LBA: 0x0 00:36:00.787 Namespace: 0x0 00:36:00.787 Vendor Log Page: 0x0 00:36:00.787 ----------- 00:36:00.787 Entry: 2 00:36:00.787 Error Count: 0x1 00:36:00.787 Submission Queue Id: 0x0 00:36:00.787 Command Id: 0x4 00:36:00.787 Phase Bit: 0 00:36:00.787 Status Code: 0x2 00:36:00.787 Status Code Type: 0x0 00:36:00.787 Do Not Retry: 1 00:36:00.787 Error Location: 0x28 00:36:00.787 LBA: 0x0 00:36:00.787 Namespace: 0x0 00:36:00.787 Vendor Log Page: 0x0 00:36:00.787 00:36:00.787 Number of Queues 00:36:00.787 ================ 00:36:00.787 Number of I/O Submission Queues: 128 00:36:00.787 Number of I/O Completion Queues: 128 00:36:00.787 00:36:00.787 ZNS Specific Controller Data 00:36:00.787 ============================ 00:36:00.787 Zone Append Size Limit: 0 00:36:00.787 00:36:00.787 00:36:00.787 Active Namespaces 00:36:00.787 ================= 00:36:00.787 get_feature(0x05) failed 00:36:00.787 Namespace ID:1 00:36:00.787 Command Set Identifier: NVM (00h) 00:36:00.787 Deallocate: Supported 00:36:00.787 Deallocated/Unwritten Error: Not Supported 00:36:00.787 Deallocated Read Value: Unknown 00:36:00.787 Deallocate in Write Zeroes: Not Supported 00:36:00.787 Deallocated Guard Field: 0xFFFF 00:36:00.787 Flush: Supported 00:36:00.787 Reservation: Not Supported 00:36:00.787 Namespace Sharing Capabilities: Multiple Controllers 00:36:00.787 Size (in LBAs): 3750748848 (1788GiB) 00:36:00.787 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:00.787 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:00.787 UUID: 69b0917c-d662-4ed2-82ef-01720bf58b21 00:36:00.787 Thin Provisioning: Not Supported 00:36:00.787 Per-NS Atomic Units: Yes 00:36:00.787 Atomic Write Unit (Normal): 8 00:36:00.787 Atomic Write Unit (PFail): 8 00:36:00.787 Preferred Write Granularity: 8 00:36:00.787 Atomic Compare & Write Unit: 8 00:36:00.787 Atomic Boundary Size (Normal): 0 00:36:00.787 Atomic Boundary Size (PFail): 0 00:36:00.787 Atomic Boundary Offset: 0 00:36:00.787 NGUID/EUI64 Never Reused: No 00:36:00.787 ANA group ID: 1 00:36:00.787 Namespace Write Protected: No 00:36:00.787 Number of LBA Formats: 1 00:36:00.787 Current LBA Format: LBA Format #00 00:36:00.787 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:00.787 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:00.787 rmmod nvme_tcp 00:36:00.787 rmmod nvme_fabrics 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.787 17:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:03.330 17:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:06.635 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:06.635 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:06.635 00:36:06.635 real 0m19.010s 00:36:06.635 user 0m5.174s 00:36:06.635 sys 0m10.926s 00:36:06.635 17:36:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.635 17:36:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:06.635 ************************************ 00:36:06.635 END TEST nvmf_identify_kernel_target 00:36:06.635 ************************************ 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.896 ************************************ 00:36:06.896 START TEST nvmf_auth_host 00:36:06.896 ************************************ 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:06.896 * Looking for test storage... 
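The clean_kernel_target teardown traced just above undoes the kernel nvmet configuration purely through configfs, in a fixed order, before unloading the target modules. A rough standalone sketch of that same sequence (paths follow the nqn.2016-06.io.spdk:testnqn layout used by this test; the redirect target of the "echo 0" step is not visible in the trace, so the namespace enable attribute is assumed):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$sub/namespaces/1/enable"                     # disable the namespace first (attribute assumed)
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # unlink the subsystem from the port
rmdir "$sub/namespaces/1"                               # then remove namespace, port, subsystem
rmdir "$port"
rmdir "$sub"
modprobe -r nvmet_tcp nvmet                             # finally unload the target modules
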
00:36:06.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:06.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.896 --rc genhtml_branch_coverage=1 00:36:06.896 --rc genhtml_function_coverage=1 00:36:06.896 --rc genhtml_legend=1 00:36:06.896 --rc geninfo_all_blocks=1 00:36:06.896 --rc geninfo_unexecuted_blocks=1 00:36:06.896 00:36:06.896 ' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:06.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.896 --rc genhtml_branch_coverage=1 00:36:06.896 --rc genhtml_function_coverage=1 00:36:06.896 --rc genhtml_legend=1 00:36:06.896 --rc geninfo_all_blocks=1 00:36:06.896 --rc geninfo_unexecuted_blocks=1 00:36:06.896 00:36:06.896 ' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:06.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.896 --rc genhtml_branch_coverage=1 00:36:06.896 --rc genhtml_function_coverage=1 00:36:06.896 --rc genhtml_legend=1 00:36:06.896 --rc geninfo_all_blocks=1 00:36:06.896 --rc geninfo_unexecuted_blocks=1 00:36:06.896 00:36:06.896 ' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:06.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.896 --rc genhtml_branch_coverage=1 00:36:06.896 --rc genhtml_function_coverage=1 00:36:06.896 --rc genhtml_legend=1 00:36:06.896 --rc geninfo_all_blocks=1 00:36:06.896 --rc geninfo_unexecuted_blocks=1 00:36:06.896 00:36:06.896 ' 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:06.896 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.157 17:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.157 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:07.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.158 17:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.737 17:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:13.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:13.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.737 
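The device scan above keys off PCI IDs (0x159b is an Intel E810 port) and then resolves each matching function to its kernel net device through sysfs. A hedged equivalent for poking at the same mapping by hand, assuming pciutils is installed and the same 8086:159b adapters are present:

# list E810 functions by numeric vendor:device ID, then show the netdev behind each one
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    # each network function exposes its netdev name under <pci address>/net/ in sysfs
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
done
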
17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:13.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:13.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.737 17:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.737 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:36:13.999 00:36:13.999 --- 10.0.0.2 ping statistics --- 00:36:13.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.999 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:36:13.999 00:36:13.999 --- 10.0.0.1 ping statistics --- 00:36:13.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.999 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:13.999 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3262650 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3262650 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3262650 ']' 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
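nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test rig: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and a ping in each direction verifies the link. A condensed sketch of the same sequence, using the interface names found above:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # clear any stale addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched with ip netns exec cvl_0_0_ns_spdk, so the target listens from inside the namespace while the initiator connects from the root namespace, exercising both physical ports.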
00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:14.260 17:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6ee86b83ac147da601d55f16e5e11e26 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.o85 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6ee86b83ac147da601d55f16e5e11e26 0 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6ee86b83ac147da601d55f16e5e11e26 0 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6ee86b83ac147da601d55f16e5e11e26 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.203 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.o85 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.o85 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.o85 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.204 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=130d7274c9682d6f673f9b007adf9af4d6e9c89c7331b3fe9175e7f8e6ca3b8f 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.f8a 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 130d7274c9682d6f673f9b007adf9af4d6e9c89c7331b3fe9175e7f8e6ca3b8f 3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 130d7274c9682d6f673f9b007adf9af4d6e9c89c7331b3fe9175e7f8e6ca3b8f 3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=130d7274c9682d6f673f9b007adf9af4d6e9c89c7331b3fe9175e7f8e6ca3b8f 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.f8a 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.f8a 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.f8a 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a3e040bd315dc50932384e8b99545da513ff5e5f1acc5830 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.wZE 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a3e040bd315dc50932384e8b99545da513ff5e5f1acc5830 0 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a3e040bd315dc50932384e8b99545da513ff5e5f1acc5830 0 
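gen_dhchap_key, traced above, builds the keys[] / ckeys[] arrays for the DH-HMAC-CHAP tests: random bytes from /dev/urandom are hex-dumped with xxd, wrapped into a DHHC-1 secret by a small inline Python helper, and written to a temp file with mode 0600. The sketch below is an illustration of that wrapping, not the SPDK helper itself; whether the hex string or its decoded bytes form the key material, the little-endian CRC-32 framing, and the 03 hash digit are all assumptions to verify against nvmf/common.sh:

key=$(xxd -p -c0 -l 32 /dev/urandom)      # 64 printable hex chars, treated here as a 64-byte secret (assumed)
umask 077                                 # keep the resulting key file private
python3 - "$key" > /tmp/example.dhchap.key <<'PY'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()                             # the hex string itself as key material (assumed)
blob = secret + struct.pack('<I', zlib.crc32(secret))     # DHHC-1 appends a CRC-32 of the key (byte order assumed)
print('DHHC-1:03:' + base64.b64encode(blob).decode() + ':')   # 03 mirrors the digest=3 (sha512) case in the trace
PY
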
00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a3e040bd315dc50932384e8b99545da513ff5e5f1acc5830 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.wZE 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.wZE 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wZE 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a9ecc10b9a19e36c973f513d977dffec188254ab45c4efc0 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.OY3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a9ecc10b9a19e36c973f513d977dffec188254ab45c4efc0 2 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a9ecc10b9a19e36c973f513d977dffec188254ab45c4efc0 2 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a9ecc10b9a19e36c973f513d977dffec188254ab45c4efc0 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.OY3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.OY3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OY3 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.204 17:36:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6438337e490067af7e4578e643f98753 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.zyO 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6438337e490067af7e4578e643f98753 1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6438337e490067af7e4578e643f98753 1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6438337e490067af7e4578e643f98753 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:36:15.204 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.zyO 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.zyO 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zyO 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=904661c82930512a3dae46cbf74f402b 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.qKL 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 904661c82930512a3dae46cbf74f402b 1 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 904661c82930512a3dae46cbf74f402b 1 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=904661c82930512a3dae46cbf74f402b 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.467 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.qKL 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.qKL 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qKL 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2049ff83ba3d27ddeace2afb3107fb57e4ea1d3e795e480b 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.KRF 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2049ff83ba3d27ddeace2afb3107fb57e4ea1d3e795e480b 2 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2049ff83ba3d27ddeace2afb3107fb57e4ea1d3e795e480b 2 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2049ff83ba3d27ddeace2afb3107fb57e4ea1d3e795e480b 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.KRF 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.KRF 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.KRF 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:15.468 17:36:13 
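The repeated "python -" steps are the DHHC-1 encoding used for NVMe DH-HMAC-CHAP secrets: the ASCII hex secret plus its CRC-32 is base64-encoded and wrapped as DHHC-1:<hash-id>:<base64>:. The formatted strings printed later in this log (DHHC-1:00:YTNl..., DHHC-1:02:YTll...) are consistent with base64 of the ASCII secret; the little-endian CRC suffix below is an assumption, so treat this as a hedged reconstruction, not the exact inline script:

# Hedged reconstruction of the DHHC-1 formatter driven by the inline "python -" calls above.
# Assumption: a little-endian CRC-32 of the secret is appended before base64-encoding.
key=a9ecc10b9a19e36c973f513d977dffec188254ab45c4efc0   # raw hex secret taken from this trace
digest=2                                               # 0=null 1=sha256 2=sha384 3=sha512
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PYEOF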
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0795574f4df15f3c700ec8ff66ed35ad 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.4tu 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0795574f4df15f3c700ec8ff66ed35ad 0 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0795574f4df15f3c700ec8ff66ed35ad 0 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0795574f4df15f3c700ec8ff66ed35ad 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.4tu 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.4tu 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4tu 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8e47cecc5427fa5bff625544f900fe7e6c6258f61f7e2fcda3d7411fb5d5d9f4 00:36:15.468 17:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ewF 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8e47cecc5427fa5bff625544f900fe7e6c6258f61f7e2fcda3d7411fb5d5d9f4 3 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8e47cecc5427fa5bff625544f900fe7e6c6258f61f7e2fcda3d7411fb5d5d9f4 3 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8e47cecc5427fa5bff625544f900fe7e6c6258f61f7e2fcda3d7411fb5d5d9f4 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:36:15.468 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ewF 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ewF 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ewF 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3262650 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3262650 ']' 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.o85 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.f8a ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f8a 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wZE 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OY3 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.OY3 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zyO 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qKL ]] 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qKL 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.732 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KRF 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4tu ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4tu 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ewF 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:15.994 17:36:14 
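Once the temp files exist, the test registers each host key and its controller counterpart with the SPDK keyring over the RPC socket (rpc_cmd in the trace is assumed to wrap scripts/rpc.py). Pulled out as standalone commands, with names and paths taken from this particular run:

# Register host keys and controller keys with the SPDK keyring; paths are the
# per-run mktemp names from this log, any readable mode-0600 key file works.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.wZE
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OY3
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.zyO
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qKL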
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:15.994 17:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:19.291 Waiting for block devices as requested 00:36:19.291 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:19.291 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:19.291 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:19.550 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:19.550 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:19.550 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:19.810 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:19.810 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:19.810 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:20.070 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:20.070 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:20.070 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:20.330 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:20.330 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:20.330 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:20.589 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:20.589 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:21.529 No valid GPT data, bailing 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:21.529 17:36:19 
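configure_kernel_target builds a kernel nvmet soft target for the host-side auth tests: load nvmet, create a subsystem with one namespace backed by the local NVMe drive that just passed the GPT check, open a TCP port on 10.0.0.1:4420, and link the two. The echo/mkdir/ln lines that follow in the trace map onto configfs roughly as below; the trace only shows the values being echoed, so the attribute file names are assumed from stock kernel nvmet:

# Sketch of the configfs sequence behind configure_kernel_target
# (attribute names assumed; echoed values are the ones in this trace).
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # exact attribute assumed
echo 1 > "$subsys/attr_allow_any_host"                        # exact attribute assumed
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"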
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:21.529 17:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:21.529 00:36:21.529 Discovery Log Number of Records 2, Generation counter 2 00:36:21.529 =====Discovery Log Entry 0====== 00:36:21.529 trtype: tcp 00:36:21.529 adrfam: ipv4 00:36:21.529 subtype: current discovery subsystem 00:36:21.529 treq: not specified, sq flow control disable supported 00:36:21.529 portid: 1 00:36:21.529 trsvcid: 4420 00:36:21.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:21.529 traddr: 10.0.0.1 00:36:21.529 eflags: none 00:36:21.529 sectype: none 00:36:21.529 =====Discovery Log Entry 1====== 00:36:21.529 trtype: tcp 00:36:21.529 adrfam: ipv4 00:36:21.529 subtype: nvme subsystem 00:36:21.529 treq: not specified, sq flow control disable supported 00:36:21.529 portid: 1 00:36:21.529 trsvcid: 4420 00:36:21.529 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:21.529 traddr: 10.0.0.1 00:36:21.529 eflags: none 00:36:21.529 sectype: none 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.529 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:21.790 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.791 nvme0n1 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
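connect_authenticate is the host-side half of each iteration: it restricts the SPDK bdev/nvme layer to the digest and DH group under test, then attaches to the kernel target with the matching keyring entries, which is what drives the DH-HMAC-CHAP exchange. For the iteration starting here (sha256, ffdhe2048, key index 0) the RPC traffic in the trace amounts to the following, with $rpc as defined earlier:

# One connect_authenticate iteration as issued over the RPC socket.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0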
00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:21.791 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.051 nvme0n1 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.051 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.052 17:36:20 
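After each successful attach the script checks that exactly the expected controller came up and then tears it down before moving to the next key, which is why "nvme0n1" and the get/detach pair repeat for every combination. Condensed, the check is:

# Post-attach verification repeated for every digest/dhgroup/key combination.
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                 # the attach must have produced controller nvme0
$rpc bdev_nvme_detach_controller nvme0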
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.052 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.363 nvme0n1 00:36:22.363 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.363 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.667 nvme0n1 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.667 17:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.667 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.668 nvme0n1 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.668 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.928 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.928 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.928 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.929 nvme0n1 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.929 17:36:21 
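On the target side, nvmet_auth_set_key rewrites the kernel host entry before every connect: the hash, the DH group, the DHHC-1 host secret and, when the key index has a controller counterpart, the bidirectional secret as well. Key index 4 (the block above) has no controller key, so only the host secret is written. The dhchap_* attribute names below are the stock kernel nvmet ones and are an assumption; the trace only shows the echoed values:

# Sketch of nvmet_auth_set_key for the sha256/ffdhe2048/key4 iteration above
# (configfs attribute names assumed from stock kernel nvmet).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
key4=$(cat /tmp/spdk.key-sha512.ewF)        # DHHC-1:03:... secret generated earlier in this log
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key4"        > "$host/dhchap_key"
# "$host/dhchap_ctrl_key" is written only for key indexes that have a ckey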
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.929 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.189 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.190 nvme0n1 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.190 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:23.451 
17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.451 nvme0n1 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.451 17:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.712 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.712 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.712 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.712 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.713 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.713 nvme0n1 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.713 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.974 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.974 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.235 nvme0n1 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.235 17:36:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.235 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.497 nvme0n1 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.497 17:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.758 nvme0n1 00:36:24.758 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.758 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:24.759 17:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.759 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.020 nvme0n1 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.020 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.281 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.542 nvme0n1 00:36:25.542 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.542 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.542 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.543 17:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.803 nvme0n1 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.803 17:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:25.803 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.804 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.064 nvme0n1 00:36:26.064 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.064 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.064 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.064 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.064 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.325 17:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.897 nvme0n1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 
00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.897 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.467 nvme0n1 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.467 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.468 17:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.468 17:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.728 nvme0n1 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.728 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.988 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.557 nvme0n1 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.557 17:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.817 nvme0n1 00:36:28.817 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.817 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.817 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.817 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.817 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:29.076 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:29.077 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.077 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.077 17:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:29.645 nvme0n1 00:36:29.645 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.645 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.645 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.645 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.645 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.906 17:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.845 nvme0n1 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:30.845 
17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.845 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.846 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.416 nvme0n1 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.416 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.417 
17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.417 17:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.357 nvme0n1 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.357 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.358 17:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.298 nvme0n1 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.298 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.299 nvme0n1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.299 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.560 nvme0n1 00:36:33.560 17:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:33.560 17:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.560 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.820 nvme0n1 00:36:33.820 17:36:32 
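The nvmet_auth_set_key calls traced above (host/auth.sh@42-@51) only show the digest, DH group and DHHC-1 key material being selected and echoed for a given key ID. A minimal sketch of such a helper is below, assuming the echoed values land in the kernel nvmet configfs host entry; the configfs paths, attribute names, and the keys/ckeys arrays are assumptions for illustration, not taken from this log.

```bash
# Sketch only: mirrors the echo sequence visible in the trace (hash, dhgroup,
# key, optional ctrl key); the destination paths below are assumed, not logged.
nvmet_host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}   # keys/ckeys provided by the test env

    echo "hmac(${digest})" > "${nvmet_host_dir}/dhchap_hash"      # assumed attribute name
    echo "${dhgroup}"      > "${nvmet_host_dir}/dhchap_dhgroup"   # assumed attribute name
    echo "${key}"          > "${nvmet_host_dir}/dhchap_key"
    [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host_dir}/dhchap_ctrl_key"
}
```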
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.820 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.820 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.820 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.821 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.081 nvme0n1 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.081 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.342 nvme0n1 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.342 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.603 nvme0n1 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.603 
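The get_main_ns_ip trace repeated above (nvmf/common.sh@767-@781) picks which address the host should dial based on the test transport and prints 10.0.0.1 for this TCP run. A simplified, hedged reconstruction of that helper follows; TEST_TRANSPORT and the NVMF_* variables are set here only so the example is self-contained, and the real helper's error handling is condensed.

```bash
# Simplified reconstruction of the traced IP-selection helper; in the real run
# TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 come from the test environment.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator-side IP
    )
    [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1       # indirect expansion: the named variable must hold an address
    echo "${!ip}"                     # prints 10.0.0.1 in this run
}
```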
17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.603 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.604 17:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:34.604 17:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.604 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.864 nvme0n1 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.864 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.865 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.125 nvme0n1 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.125 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.126 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.386 nvme0n1 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:35.386 
17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.386 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.647 nvme0n1 00:36:35.647 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.647 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.647 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.647 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.647 17:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.647 
17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:35.647 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.648 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.908 nvme0n1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.909 17:36:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.909 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.169 nvme0n1 00:36:36.169 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.169 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.169 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.169 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.169 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:36.430 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.431 17:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 nvme0n1 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.693 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.954 nvme0n1 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.954 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.955 17:36:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.955 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.527 nvme0n1 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:37.527 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.528 17:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.788 nvme0n1 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.788 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:38.049 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.050 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.310 nvme0n1 00:36:38.310 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.310 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.310 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.310 17:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.310 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.310 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.571 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.572 17:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.572 17:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 nvme0n1 00:36:38.832 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.832 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.832 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.832 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.832 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.092 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.093 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.093 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.093 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.093 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.093 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.093 
17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 nvme0n1 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.664 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.665 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.665 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.665 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.665 17:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.925 nvme0n1 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.925 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.186 17:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:40.186 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.187 17:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.756 nvme0n1 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.756 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.017 17:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.588 nvme0n1 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.588 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.848 
17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.848 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.418 nvme0n1 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.418 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.679 17:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.679 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.250 nvme0n1 00:36:43.250 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.250 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.250 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.250 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.250 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.250 17:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:43.510 17:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.510 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:43.511 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:43.511 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:43.511 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.511 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.511 17:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.081 nvme0n1 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.081 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.341 nvme0n1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.341 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.602 nvme0n1 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.602 17:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:44.602 
17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.602 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.863 nvme0n1 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.863 
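The [[ -z ... ]] check at host/auth.sh@51 and the ${ckeys[keyid]:+...} expansion at @58 visible above are what switch each attach between mutual and unidirectional DH-HMAC-CHAP: when ckeys[keyid] is empty (keyid 4 in this run), no --dhchap-ctrlr-key is passed and only the host authenticates. A minimal sketch of that decision, assuming the keyring entries keyN/ckeyN were registered earlier in the run:

ckey_arg=()
if [[ -n "${ckeys[keyid]}" ]]; then
    # controller secret present: request mutual authentication
    ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
fi
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey_arg[@]}"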
17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.863 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.124 nvme0n1 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.124 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.125 nvme0n1 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.125 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.385 nvme0n1 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.385 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.386 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.386 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.646 
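The nvmf/common.sh@767-@781 lines repeated before every attach are the get_main_ns_ip helper resolving which address to dial. The sketch below is reconstructed from the trace; the transport variable name and the indirect expansion are assumptions, while the rdma/tcp mapping and the 10.0.0.1 result come straight from the output above.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs use NVMF_INITIATOR_IP (10.0.0.1 in this job)
    )
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}      # name of the variable to read
    [[ -z ${!ip} ]] && return 1               # its value, e.g. 10.0.0.1
    echo "${!ip}"
}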
17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:45.646 17:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.646 17:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.646 nvme0n1 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.646 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:45.907 17:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.907 nvme0n1 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.907 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 nvme0n1 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.264 
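Each connect_authenticate pass seen above boils down to four RPCs against the SPDK host application, condensed here for the sha512/ffdhe3072/key3 case just traced. rpc_cmd is the test suite's wrapper around scripts/rpc.py, and key3/ckey3 are keyring entries created earlier in the run, outside this excerpt.

# restrict the host to the digest and DH group under test
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# attach with DH-HMAC-CHAP, authenticating both host (key3) and controller (ckey3)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# verify the controller actually came up, then tear it down for the next case
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0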
17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:46.264 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.265 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:46.596 nvme0n1 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.596 17:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:46.596 17:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.596 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.857 nvme0n1 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.857 17:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.857 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.118 17:36:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.118 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.380 nvme0n1 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.380 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.381 17:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.641 nvme0n1 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.641 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.642 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.903 nvme0n1 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.903 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.163 nvme0n1 00:36:48.163 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:48.423 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.424 17:36:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.424 17:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.995 nvme0n1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:48.995 17:36:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.995 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.256 nvme0n1 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.256 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.516 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.517 17:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.094 nvme0n1 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.094 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.354 nvme0n1 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.354 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.615 17:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.615 17:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.875 nvme0n1 00:36:50.875 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.875 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.875 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.875 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.875 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlODZiODNhYzE0N2RhNjAxZDU1ZjE2ZTVlMTFlMjabv3Z+: 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMwZDcyNzRjOTY4MmQ2ZjY3M2Y5YjAwN2FkZjlhZjRkNmU5Yzg5YzczMzFiM2ZlOTE3NWU3ZjhlNmNhM2I4Zhicz/w=: 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.135 17:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.704 nvme0n1 00:36:51.704 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.704 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.704 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.704 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.704 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.965 17:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.535 nvme0n1 00:36:52.535 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.535 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.536 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.536 17:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.536 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:52.795 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.796 17:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.796 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.364 nvme0n1 00:36:53.364 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.364 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.364 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.364 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.364 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjA0OWZmODNiYTNkMjdkZGVhY2UyYWZiMzEwN2ZiNTdlNGVhMWQzZTc5NWU0ODBiBSC2kg==: 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDc5NTU3NGY0ZGYxNWYzYzcwMGVjOGZmNjZlZDM1YWQgohB1: 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:53.625 17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.625 
17:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.195 nvme0n1 00:36:54.195 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.195 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.195 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.195 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.195 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU0N2NlY2M1NDI3ZmE1YmZmNjI1NTQ0ZjkwMGZlN2U2YzYyNThmNjFmN2UyZmNkYTNkNzQxMWZiNWQ1ZDlmNJ1OAbY=: 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.455 17:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.026 nvme0n1 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.026 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.288 request: 00:36:55.288 { 00:36:55.288 "name": "nvme0", 00:36:55.288 "trtype": "tcp", 00:36:55.288 "traddr": "10.0.0.1", 00:36:55.288 "adrfam": "ipv4", 00:36:55.288 "trsvcid": "4420", 00:36:55.288 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.288 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.288 "prchk_reftag": false, 00:36:55.288 "prchk_guard": false, 00:36:55.288 "hdgst": false, 00:36:55.288 "ddgst": false, 00:36:55.288 "allow_unrecognized_csi": false, 00:36:55.288 "method": "bdev_nvme_attach_controller", 00:36:55.288 "req_id": 1 00:36:55.288 } 00:36:55.288 Got JSON-RPC error response 00:36:55.288 response: 00:36:55.288 { 00:36:55.288 "code": -5, 00:36:55.288 "message": "Input/output error" 00:36:55.288 } 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.288 request: 00:36:55.288 { 00:36:55.288 "name": "nvme0", 00:36:55.288 "trtype": "tcp", 00:36:55.288 "traddr": "10.0.0.1", 00:36:55.288 "adrfam": "ipv4", 00:36:55.288 "trsvcid": "4420", 00:36:55.288 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.288 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.288 "prchk_reftag": false, 00:36:55.288 "prchk_guard": false, 00:36:55.288 "hdgst": false, 00:36:55.288 "ddgst": false, 00:36:55.288 "dhchap_key": "key2", 00:36:55.288 "allow_unrecognized_csi": false, 00:36:55.288 "method": "bdev_nvme_attach_controller", 00:36:55.288 "req_id": 1 00:36:55.288 } 00:36:55.288 Got JSON-RPC error response 00:36:55.288 response: 00:36:55.288 { 00:36:55.288 "code": -5, 00:36:55.288 "message": "Input/output error" 00:36:55.288 } 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:55.288 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:55.289 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.289 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:55.289 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.289 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
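The trace above captures the host side of SPDK's DH-CHAP checks: the allowed digest and DH group are set with bdev_nvme_set_options, the controller is attached with --dhchap-key/--dhchap-ctrlr-key, verified with bdev_nvme_get_controllers, then detached; when the host offers no key (or one the kernel target does not hold), the attach fails and the RPC layer reports code -5, "Input/output error", which the NOT wrapper treats as the expected outcome. Condensed into a minimal sketch -- assuming rpc_cmd forwards to scripts/rpc.py against the running target, and that key1/ckey1 are key names prepared earlier in auth.sh (their registration is not shown here):

    # configure which DH-CHAP digest/dhgroup the host may negotiate
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with matching host and controller keys -- expected to authenticate
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers          # should list nvme0
    rpc.py bdev_nvme_detach_controller nvme0
    # negative case from the trace: no key offered, authentication fails and
    # bdev_nvme_attach_controller returns JSON-RPC code -5 (Input/output error)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0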
00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.549 request: 00:36:55.549 { 00:36:55.549 "name": "nvme0", 00:36:55.549 "trtype": "tcp", 00:36:55.549 "traddr": "10.0.0.1", 00:36:55.549 "adrfam": "ipv4", 00:36:55.549 "trsvcid": "4420", 00:36:55.549 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:55.549 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:55.549 "prchk_reftag": false, 00:36:55.549 "prchk_guard": false, 00:36:55.549 "hdgst": false, 00:36:55.549 "ddgst": false, 00:36:55.549 "dhchap_key": "key1", 00:36:55.549 "dhchap_ctrlr_key": "ckey2", 00:36:55.549 "allow_unrecognized_csi": false, 00:36:55.549 "method": "bdev_nvme_attach_controller", 00:36:55.549 "req_id": 1 00:36:55.549 } 00:36:55.549 Got JSON-RPC error response 00:36:55.549 response: 00:36:55.549 { 00:36:55.549 "code": -5, 00:36:55.549 "message": "Input/output 
error" 00:36:55.549 } 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.549 17:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.811 nvme0n1 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.811 request: 00:36:55.811 { 00:36:55.811 "name": "nvme0", 00:36:55.811 "dhchap_key": "key1", 00:36:55.811 "dhchap_ctrlr_key": "ckey2", 00:36:55.811 "method": "bdev_nvme_set_keys", 00:36:55.811 "req_id": 1 00:36:55.811 } 00:36:55.811 Got JSON-RPC error response 00:36:55.811 response: 00:36:55.811 { 00:36:55.811 "code": -13, 00:36:55.811 "message": "Permission denied" 00:36:55.811 } 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:55.811 17:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:57.195 17:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTNlMDQwYmQzMTVkYzUwOTMyMzg0ZThiOTk1NDVkYTUxM2ZmNWU1ZjFhY2M1ODMwZ/7lwA==: 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: ]] 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTllY2MxMGI5YTE5ZTM2Yzk3M2Y1MTNkOTc3ZGZmZWMxODgyNTRhYjQ1YzRlZmMwje5BxA==: 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:58.136 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.137 nvme0n1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQzODMzN2U0OTAwNjdhZjdlNDU3OGU2NDNmOTg3NTN7paym: 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: ]] 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA0NjYxYzgyOTMwNTEyYTNkYWU0NmNiZjc0ZjQwMmItKxti: 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.137 request: 00:36:58.137 { 00:36:58.137 "name": "nvme0", 00:36:58.137 "dhchap_key": "key2", 00:36:58.137 "dhchap_ctrlr_key": "ckey1", 00:36:58.137 "method": "bdev_nvme_set_keys", 00:36:58.137 "req_id": 1 00:36:58.137 } 00:36:58.137 Got JSON-RPC error response 00:36:58.137 response: 00:36:58.137 { 00:36:58.137 "code": -13, 00:36:58.137 "message": "Permission denied" 00:36:58.137 } 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:58.137 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:58.397 17:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:59.338 17:36:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.338 rmmod nvme_tcp 00:36:59.338 rmmod nvme_fabrics 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3262650 ']' 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3262650 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3262650 ']' 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3262650 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.338 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262650 00:36:59.599 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:59.599 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:59.599 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262650' 00:36:59.599 killing process with pid 3262650 00:36:59.599 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3262650 00:36:59.599 17:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3262650 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:59.599 17:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:02.141 17:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:05.438 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:05.438 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:05.438 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.o85 /tmp/spdk.key-null.wZE /tmp/spdk.key-sha256.zyO /tmp/spdk.key-sha384.KRF /tmp/spdk.key-sha512.ewF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:05.438 17:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:08.739 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:37:08.739 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:08.739 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:08.739 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:09.000 00:37:09.000 real 1m2.266s 00:37:09.000 user 0m56.512s 00:37:09.000 sys 0m15.118s 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.000 ************************************ 00:37:09.000 END TEST nvmf_auth_host 00:37:09.000 ************************************ 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:09.000 17:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.261 ************************************ 00:37:09.261 START TEST nvmf_digest 00:37:09.261 ************************************ 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:09.261 * Looking for test storage... 
00:37:09.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:09.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.261 --rc genhtml_branch_coverage=1 00:37:09.261 --rc genhtml_function_coverage=1 00:37:09.261 --rc genhtml_legend=1 00:37:09.261 --rc geninfo_all_blocks=1 00:37:09.261 --rc geninfo_unexecuted_blocks=1 00:37:09.261 00:37:09.261 ' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:09.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.261 --rc genhtml_branch_coverage=1 00:37:09.261 --rc genhtml_function_coverage=1 00:37:09.261 --rc genhtml_legend=1 00:37:09.261 --rc geninfo_all_blocks=1 00:37:09.261 --rc geninfo_unexecuted_blocks=1 00:37:09.261 00:37:09.261 ' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:09.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.261 --rc genhtml_branch_coverage=1 00:37:09.261 --rc genhtml_function_coverage=1 00:37:09.261 --rc genhtml_legend=1 00:37:09.261 --rc geninfo_all_blocks=1 00:37:09.261 --rc geninfo_unexecuted_blocks=1 00:37:09.261 00:37:09.261 ' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:09.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.261 --rc genhtml_branch_coverage=1 00:37:09.261 --rc genhtml_function_coverage=1 00:37:09.261 --rc genhtml_legend=1 00:37:09.261 --rc geninfo_all_blocks=1 00:37:09.261 --rc geninfo_unexecuted_blocks=1 00:37:09.261 00:37:09.261 ' 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:09.261 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:09.262 
17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:09.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:09.262 17:37:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:09.262 17:37:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.843 
17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:15.843 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:15.843 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:15.843 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:15.844 Found net devices under 0000:4b:00.0: cvl_0_0 
00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:15.844 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:15.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:15.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:37:15.844 00:37:15.844 --- 10.0.0.2 ping statistics --- 00:37:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.844 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:15.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:15.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:37:15.844 00:37:15.844 --- 10.0.0.1 ping statistics --- 00:37:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.844 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:15.844 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.108 ************************************ 00:37:16.108 START TEST nvmf_digest_clean 00:37:16.108 ************************************ 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3279693 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3279693 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3279693 ']' 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.108 [2024-10-01 17:37:14.457820] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:16.108 [2024-10-01 17:37:14.457868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.108 [2024-10-01 17:37:14.525829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.108 [2024-10-01 17:37:14.556567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.108 [2024-10-01 17:37:14.556606] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.108 [2024-10-01 17:37:14.556615] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.108 [2024-10-01 17:37:14.556623] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.108 [2024-10-01 17:37:14.556629] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:16.108 [2024-10-01 17:37:14.556648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.108 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.109 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 null0 00:37:16.369 [2024-10-01 17:37:14.700472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.369 [2024-10-01 17:37:14.724676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3279713 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3279713 /var/tmp/bperf.sock 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3279713 ']' 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:16.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.369 17:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:16.369 [2024-10-01 17:37:14.779841] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:16.369 [2024-10-01 17:37:14.779889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279713 ] 00:37:16.369 [2024-10-01 17:37:14.855816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.369 [2024-10-01 17:37:14.886313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.310 17:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.571 nvme0n1 00:37:17.571 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:17.571 17:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:17.571 Running I/O for 2 seconds... 
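The pass starting above is the first of the digest runs: a 4096-byte, queue-depth-128 randread workload driven for 2 seconds by bdevperf on core 1, with the NVMe/TCP controller attached using --ddgst so the data PDUs carry a CRC32C data digest. A minimal sketch of the same bring-up done by hand, assuming the target started earlier by digest.sh is still listening on 10.0.0.2:4420 and using the workspace paths shown in the trace (rpc.py below stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

  # launch bdevperf on core 1, held at --wait-for-rpc until initialization is completed over the socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # complete framework init, then attach the controller with data digest enabled
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # start the timed run; this is what bperf_py perform_tests issues
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The results that follow report the 2-second averages for this configuration.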
00:37:19.891 19681.00 IOPS, 76.88 MiB/s 19724.00 IOPS, 77.05 MiB/s 00:37:19.891 Latency(us) 00:37:19.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.891 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:19.891 nvme0n1 : 2.00 19741.01 77.11 0.00 0.00 6476.39 2990.08 14745.60 00:37:19.891 =================================================================================================================== 00:37:19.891 Total : 19741.01 77.11 0.00 0.00 6476.39 2990.08 14745.60 00:37:19.891 { 00:37:19.891 "results": [ 00:37:19.891 { 00:37:19.891 "job": "nvme0n1", 00:37:19.891 "core_mask": "0x2", 00:37:19.891 "workload": "randread", 00:37:19.891 "status": "finished", 00:37:19.891 "queue_depth": 128, 00:37:19.891 "io_size": 4096, 00:37:19.891 "runtime": 2.004761, 00:37:19.891 "iops": 19741.006533945943, 00:37:19.891 "mibps": 77.11330677322634, 00:37:19.891 "io_failed": 0, 00:37:19.891 "io_timeout": 0, 00:37:19.891 "avg_latency_us": 6476.385321743818, 00:37:19.891 "min_latency_us": 2990.08, 00:37:19.891 "max_latency_us": 14745.6 00:37:19.891 } 00:37:19.892 ], 00:37:19.892 "core_count": 1 00:37:19.892 } 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:19.892 | select(.opcode=="crc32c") 00:37:19.892 | "\(.module_name) \(.executed)"' 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3279713 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3279713 ']' 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3279713 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279713 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3279713' 00:37:19.892 killing process with pid 3279713 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3279713 00:37:19.892 Received shutdown signal, test time was about 2.000000 seconds 00:37:19.892 00:37:19.892 Latency(us) 00:37:19.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.892 =================================================================================================================== 00:37:19.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:19.892 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3279713 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3280402 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3280402 /var/tmp/bperf.sock 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3280402 ']' 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:20.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:20.152 17:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:20.152 [2024-10-01 17:37:18.542401] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:20.152 [2024-10-01 17:37:18.542459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280402 ] 00:37:20.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:20.152 Zero copy mechanism will not be used. 
00:37:20.152 [2024-10-01 17:37:18.617604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.152 [2024-10-01 17:37:18.647944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.096 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.096 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:21.096 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:21.097 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:21.097 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:21.097 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.097 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.356 nvme0n1 00:37:21.356 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:21.356 17:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:21.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:21.356 Zero copy mechanism will not be used. 00:37:21.356 Running I/O for 2 seconds... 
00:37:23.677 3307.00 IOPS, 413.38 MiB/s 3360.50 IOPS, 420.06 MiB/s 00:37:23.677 Latency(us) 00:37:23.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:23.678 nvme0n1 : 2.00 3363.38 420.42 0.00 0.00 4754.17 686.08 7536.64 00:37:23.678 =================================================================================================================== 00:37:23.678 Total : 3363.38 420.42 0.00 0.00 4754.17 686.08 7536.64 00:37:23.678 { 00:37:23.678 "results": [ 00:37:23.678 { 00:37:23.678 "job": "nvme0n1", 00:37:23.678 "core_mask": "0x2", 00:37:23.678 "workload": "randread", 00:37:23.678 "status": "finished", 00:37:23.678 "queue_depth": 16, 00:37:23.678 "io_size": 131072, 00:37:23.678 "runtime": 2.003044, 00:37:23.678 "iops": 3363.38093421812, 00:37:23.678 "mibps": 420.422616777265, 00:37:23.678 "io_failed": 0, 00:37:23.678 "io_timeout": 0, 00:37:23.678 "avg_latency_us": 4754.170160803522, 00:37:23.678 "min_latency_us": 686.08, 00:37:23.678 "max_latency_us": 7536.64 00:37:23.678 } 00:37:23.678 ], 00:37:23.678 "core_count": 1 00:37:23.678 } 00:37:23.678 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:23.678 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:23.678 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:23.678 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:23.678 | select(.opcode=="crc32c") 00:37:23.678 | "\(.module_name) \(.executed)"' 00:37:23.678 17:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3280402 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3280402 ']' 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3280402 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3280402 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3280402' 00:37:23.678 killing process with pid 3280402 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3280402 00:37:23.678 Received shutdown signal, test time was about 2.000000 seconds 00:37:23.678 00:37:23.678 Latency(us) 00:37:23.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.678 =================================================================================================================== 00:37:23.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:23.678 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3280402 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3281083 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3281083 /var/tmp/bperf.sock 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3281083 ']' 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:23.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.939 17:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.939 [2024-10-01 17:37:22.338106] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:37:23.939 [2024-10-01 17:37:22.338189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281083 ] 00:37:23.939 [2024-10-01 17:37:22.420293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.939 [2024-10-01 17:37:22.446497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:24.881 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:25.453 nvme0n1 00:37:25.453 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:25.453 17:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:25.453 Running I/O for 2 seconds... 
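Each digest-clean pass traced above drives bdevperf the same way over its private RPC socket. A condensed shell sketch of that sequence, using the 4 KiB randwrite / qd 128 parameters of the pass just started (binary paths, target address and NQN are copied from the trace; SPDK_DIR is shorthand introduced here, and the real script waits for the socket via waitforlisten before issuing RPCs):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # -z keeps bdevperf idle until perform_tests is sent; --wait-for-rpc defers framework init
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  # afterwards the script reads the crc32c accel stats and checks they ran in the expected module
  # (software here, since scan_dsa=false)
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'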
00:37:27.337 21442.00 IOPS, 83.76 MiB/s 21513.50 IOPS, 84.04 MiB/s 00:37:27.337 Latency(us) 00:37:27.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.337 nvme0n1 : 2.01 21538.78 84.14 0.00 0.00 5934.56 2266.45 12451.84 00:37:27.337 =================================================================================================================== 00:37:27.337 Total : 21538.78 84.14 0.00 0.00 5934.56 2266.45 12451.84 00:37:27.337 { 00:37:27.337 "results": [ 00:37:27.337 { 00:37:27.337 "job": "nvme0n1", 00:37:27.337 "core_mask": "0x2", 00:37:27.337 "workload": "randwrite", 00:37:27.337 "status": "finished", 00:37:27.338 "queue_depth": 128, 00:37:27.338 "io_size": 4096, 00:37:27.338 "runtime": 2.006567, 00:37:27.338 "iops": 21538.77742432722, 00:37:27.338 "mibps": 84.13584931377821, 00:37:27.338 "io_failed": 0, 00:37:27.338 "io_timeout": 0, 00:37:27.338 "avg_latency_us": 5934.559907448113, 00:37:27.338 "min_latency_us": 2266.4533333333334, 00:37:27.338 "max_latency_us": 12451.84 00:37:27.338 } 00:37:27.338 ], 00:37:27.338 "core_count": 1 00:37:27.338 } 00:37:27.338 17:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:27.338 17:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:27.338 17:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:27.338 17:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:27.338 | select(.opcode=="crc32c") 00:37:27.338 | "\(.module_name) \(.executed)"' 00:37:27.338 17:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:27.598 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3281083 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3281083 ']' 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3281083 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281083 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3281083' 00:37:27.599 killing process with pid 3281083 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3281083 00:37:27.599 Received shutdown signal, test time was about 2.000000 seconds 00:37:27.599 00:37:27.599 Latency(us) 00:37:27.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.599 =================================================================================================================== 00:37:27.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:27.599 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3281083 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3281790 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3281790 /var/tmp/bperf.sock 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3281790 ']' 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:27.859 17:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:27.859 [2024-10-01 17:37:26.221225] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:27.859 [2024-10-01 17:37:26.221281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281790 ] 00:37:27.859 I/O size of 131072 is greater than zero copy threshold (65536). 
00:37:27.859 Zero copy mechanism will not be used. 00:37:27.859 [2024-10-01 17:37:26.298837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.859 [2024-10-01 17:37:26.325985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.801 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:29.371 nvme0n1 00:37:29.371 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:29.371 17:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:29.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:29.371 Zero copy mechanism will not be used. 00:37:29.371 Running I/O for 2 seconds... 
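As a quick consistency check on the JSON result blocks above, the reported "mibps" is just "iops" scaled by "io_size"; for the 4 KiB randwrite result a few lines up:

  awk 'BEGIN { printf "%.2f MiB/s\n", 21538.78 * 4096 / (1024 * 1024) }'   # prints 84.14 MiB/s, matching the mibps value reported above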
00:37:31.253 5138.00 IOPS, 642.25 MiB/s 4771.50 IOPS, 596.44 MiB/s 00:37:31.253 Latency(us) 00:37:31.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:31.253 nvme0n1 : 2.00 4774.08 596.76 0.00 0.00 3347.86 1645.23 6853.97 00:37:31.253 =================================================================================================================== 00:37:31.253 Total : 4774.08 596.76 0.00 0.00 3347.86 1645.23 6853.97 00:37:31.253 { 00:37:31.253 "results": [ 00:37:31.253 { 00:37:31.253 "job": "nvme0n1", 00:37:31.253 "core_mask": "0x2", 00:37:31.253 "workload": "randwrite", 00:37:31.253 "status": "finished", 00:37:31.253 "queue_depth": 16, 00:37:31.253 "io_size": 131072, 00:37:31.253 "runtime": 2.00227, 00:37:31.253 "iops": 4774.081417591035, 00:37:31.253 "mibps": 596.7601771988793, 00:37:31.253 "io_failed": 0, 00:37:31.253 "io_timeout": 0, 00:37:31.253 "avg_latency_us": 3347.8601081005686, 00:37:31.253 "min_latency_us": 1645.2266666666667, 00:37:31.253 "max_latency_us": 6853.973333333333 00:37:31.253 } 00:37:31.253 ], 00:37:31.253 "core_count": 1 00:37:31.253 } 00:37:31.253 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:31.253 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:31.253 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:31.253 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:31.253 | select(.opcode=="crc32c") 00:37:31.253 | "\(.module_name) \(.executed)"' 00:37:31.253 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3281790 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3281790 ']' 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3281790 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281790 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3281790' 00:37:31.514 killing process with pid 3281790 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3281790 00:37:31.514 Received shutdown signal, test time was about 2.000000 seconds 00:37:31.514 00:37:31.514 Latency(us) 00:37:31.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.514 =================================================================================================================== 00:37:31.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:31.514 17:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3281790 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3279693 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3279693 ']' 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3279693 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279693 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279693' 00:37:31.775 killing process with pid 3279693 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3279693 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3279693 00:37:31.775 00:37:31.775 real 0m15.875s 00:37:31.775 user 0m31.860s 00:37:31.775 sys 0m3.503s 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:31.775 ************************************ 00:37:31.775 END TEST nvmf_digest_clean 00:37:31.775 ************************************ 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.775 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:32.036 ************************************ 00:37:32.036 START TEST nvmf_digest_error 00:37:32.036 ************************************ 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:32.036 17:37:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3282758 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3282758 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3282758 ']' 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.036 17:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:32.036 [2024-10-01 17:37:30.405215] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:32.036 [2024-10-01 17:37:30.405263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.036 [2024-10-01 17:37:30.471321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.036 [2024-10-01 17:37:30.501579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.036 [2024-10-01 17:37:30.501618] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.036 [2024-10-01 17:37:30.501627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.036 [2024-10-01 17:37:30.501634] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.036 [2024-10-01 17:37:30.501640] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
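As the target's startup notices above say, this nvmf_tgt instance was launched with tracepoint group mask 0xFFFF, so its trace data can be inspected during or after the run; the two options are the ones the log itself suggests (the copy destination below is an arbitrary choice):

  spdk_trace -s nvmf -i 0           # capture a snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shm trace file for offline analysis/debug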
00:37:32.036 [2024-10-01 17:37:30.501659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.976 [2024-10-01 17:37:31.227714] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.976 null0 00:37:32.976 [2024-10-01 17:37:31.299452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.976 [2024-10-01 17:37:31.323663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3282829 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3282829 /var/tmp/bperf.sock 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3282829 ']' 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:32.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.976 17:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:32.976 [2024-10-01 17:37:31.376411] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:32.976 [2024-10-01 17:37:31.376460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282829 ] 00:37:32.976 [2024-10-01 17:37:31.452398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.976 [2024-10-01 17:37:31.480782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:33.917 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:34.178 nvme0n1 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:34.178 17:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.439 Running I/O for 2 seconds... 00:37:34.439 [2024-10-01 17:37:32.763001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.763032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.763041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.773090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.773110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.773118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.788358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.788376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.788383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.801958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.801977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.801984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.816210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.816229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.816236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.828791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.828809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.842914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.842932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.842938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.854941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.854963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.854970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.867560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.867577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.867584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.880223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.880240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.880246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.892535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.892552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.892559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.906006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.906024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.906030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.918945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.918963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.918970] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.932192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.932210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.932217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.945451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.945469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.945476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.957825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.957842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.957849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.969498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.969515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.969522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.439 [2024-10-01 17:37:32.983232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.439 [2024-10-01 17:37:32.983249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.439 [2024-10-01 17:37:32.983256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:32.993531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:32.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 17:37:32.993556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.009166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.009183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 
17:37:33.009189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.019556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.019573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 17:37:33.019580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.033110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.033127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 17:37:33.033134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.046958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.046975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 17:37:33.046982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.059166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.059184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.700 [2024-10-01 17:37:33.059190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.700 [2024-10-01 17:37:33.071706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.700 [2024-10-01 17:37:33.071723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.071736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.086245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.086263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.086269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.100318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.100336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:788 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.100343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.113005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.113023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.113029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.125723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.125741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.125747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.138861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.138878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.138884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.151310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.151327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.151334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.163363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.163380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.163387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.176490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.176507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.176514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.188774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.188795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.188802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.200884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.200902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.200909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.214449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.214467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.214473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.228961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.228978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.228985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.701 [2024-10-01 17:37:33.238641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.701 [2024-10-01 17:37:33.238659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.701 [2024-10-01 17:37:33.238666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.252590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.252608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.252615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.266738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.266757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.266763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.279315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.279333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.279341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.293965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.293983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.293990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.307479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.307497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.307504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.317707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.317726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.317733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.333294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.333313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.333319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.346179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.346197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.346204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.359923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.962 [2024-10-01 17:37:33.359942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.962 [2024-10-01 17:37:33.359949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.962 [2024-10-01 17:37:33.371460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 
00:37:34.963 [2024-10-01 17:37:33.371478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.371485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.383999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.384017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.397992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.398016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.398023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.411711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.411729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.411739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.422783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.422801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.422808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.435676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.435693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.435700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.448406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.448424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.448430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.462097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.462115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.462122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.476755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.476773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.476780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.489075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.489093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.963 [2024-10-01 17:37:33.500432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:34.963 [2024-10-01 17:37:33.500449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.963 [2024-10-01 17:37:33.500456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.514015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.514033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.514039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.527910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.527928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.527935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.541227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.541244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.541251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.551328] 
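The repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above are the intended outcome of this test: earlier in the trace, crc32c on the nvmf target was routed to the "error" accel module (accel_assign_opc, issued while the target was still waiting for RPC init) and then set to corrupt operations, so data digest verification fails on the host side and the affected READs complete with a transient transport error. A minimal sketch of the two injection RPCs as they appear in the trace, assuming the target's default RPC socket (/var/tmp/spdk.sock) and a shortened path prefix:

  scripts/rpc.py accel_assign_opc -o crc32c -m error                     # route crc32c through the error-injection accel module
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # enable 'corrupt' injection; arguments copied from the trace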
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.551345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.551352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.565435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.565453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.565460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.578111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.578128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.578135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.592521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.592539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.592546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.604926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.604944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.604951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.618337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.618354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.618361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.631083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.631100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.631110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.643414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.643432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.643438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.654350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.654374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.669084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.669102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.669109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.681858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.681876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.681882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.693359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.693377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.693384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.707679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.707697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.707703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.719716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.719734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.732416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.732434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.732440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 19467.00 IOPS, 76.04 MiB/s [2024-10-01 17:37:33.747143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.747164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.747171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.225 [2024-10-01 17:37:33.760210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.225 [2024-10-01 17:37:33.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.225 [2024-10-01 17:37:33.760235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.772293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.772311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.772318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.784471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.784489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.784496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.797719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.797737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.797744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.809600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.809618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.809624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.822506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.822525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.822532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.835224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.835242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.835249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.849390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.849409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.849415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.862236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.862253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.862260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.873103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.873128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.886785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.886804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.886811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.901751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.901769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22349 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.901775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.915639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.915657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.915664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.927859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.927876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.486 [2024-10-01 17:37:33.927883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.486 [2024-10-01 17:37:33.938544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.486 [2024-10-01 17:37:33.938562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:33.938568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:33.952919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:33.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:33.952944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:33.967017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:33.967038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:33.967045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:33.979794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:33.979812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:33.979819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:33.990372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:33.990390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:15264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:33.990397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:34.003860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:34.003878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:34.003885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:34.018054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:34.018072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:34.018079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.487 [2024-10-01 17:37:34.031589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.487 [2024-10-01 17:37:34.031607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.487 [2024-10-01 17:37:34.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.747 [2024-10-01 17:37:34.044415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.747 [2024-10-01 17:37:34.044433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.747 [2024-10-01 17:37:34.044440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.747 [2024-10-01 17:37:34.056397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.747 [2024-10-01 17:37:34.056415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.747 [2024-10-01 17:37:34.056422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.747 [2024-10-01 17:37:34.070507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.070525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.070533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.081553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.081570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.081577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.094921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.094939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.094945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.108206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.108224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.108230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.122401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.122419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.122426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.134801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.134818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.134825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.147320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.147338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.147344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.161340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.161358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.161365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.171196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 
00:37:35.748 [2024-10-01 17:37:34.171214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.171220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.185639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.185656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.185667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.198998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.199016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.199023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.211682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.211699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.211706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.223511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.223534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.238302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.238320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.238327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.251166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.251183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.251190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.261955] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.261972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.261979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.276765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.276783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.276790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.748 [2024-10-01 17:37:34.290004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:35.748 [2024-10-01 17:37:34.290022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.748 [2024-10-01 17:37:34.290029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.303786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.303807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.303814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.316795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.316812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.316819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.329069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.329086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.329092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.342920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.342937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.342944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:36.009 [2024-10-01 17:37:34.355386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.355403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.355410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.365822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.365839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.365846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.380275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.380293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.380300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.394105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.394123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.394129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.405578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.405596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.009 [2024-10-01 17:37:34.405603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.009 [2024-10-01 17:37:34.418137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.009 [2024-10-01 17:37:34.418155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.418161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.431094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.431111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.431118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.441826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.441844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.441850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.456957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.456974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.456980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.470115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.470132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.470139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.483116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.483134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.483141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.495804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.495821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.495828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.507538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.507555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.507562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.521810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.521827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.521837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.533448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.533465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.533471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.010 [2024-10-01 17:37:34.546756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.010 [2024-10-01 17:37:34.546773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.010 [2024-10-01 17:37:34.546780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.270 [2024-10-01 17:37:34.559467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.270 [2024-10-01 17:37:34.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.270 [2024-10-01 17:37:34.559492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.270 [2024-10-01 17:37:34.571611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.270 [2024-10-01 17:37:34.571628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.270 [2024-10-01 17:37:34.571635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.584876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.584894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.584901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.597286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.597310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.609536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.609553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:36.271 [2024-10-01 17:37:34.609560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.624289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.624307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.624313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.635575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.635592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.647277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.647296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.647302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.662228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.662246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.662253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.674585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.674602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.674609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.688115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.688132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:36.271 [2024-10-01 17:37:34.688139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.271 [2024-10-01 17:37:34.700462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90) 00:37:36.271 [2024-10-01 17:37:34.700479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:36.271 [2024-10-01 17:37:34.700486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:36.271 [2024-10-01 17:37:34.714307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90)
00:37:36.271 [2024-10-01 17:37:34.714324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:36.271 [2024-10-01 17:37:34.714331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:36.271 [2024-10-01 17:37:34.725249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90)
00:37:36.271 [2024-10-01 17:37:34.725266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:36.271 [2024-10-01 17:37:34.725272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:36.271 [2024-10-01 17:37:34.739738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15caa90)
00:37:36.271 [2024-10-01 17:37:34.739755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:36.271 [2024-10-01 17:37:34.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:36.271 19654.00 IOPS, 76.77 MiB/s
00:37:36.271 Latency(us)
00:37:36.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:36.271 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:36.271 nvme0n1 : 2.01 19667.99 76.83 0.00 0.00 6499.03 2225.49 22063.79
00:37:36.271 ===================================================================================================================
00:37:36.271 Total : 19667.99 76.83 0.00 0.00 6499.03 2225.49 22063.79
00:37:36.271 {
00:37:36.271 "results": [
00:37:36.271 {
00:37:36.271 "job": "nvme0n1",
00:37:36.271 "core_mask": "0x2",
00:37:36.271 "workload": "randread",
00:37:36.271 "status": "finished",
00:37:36.271 "queue_depth": 128,
00:37:36.271 "io_size": 4096,
00:37:36.271 "runtime": 2.005085,
00:37:36.271 "iops": 19667.994124937348,
00:37:36.271 "mibps": 76.82810205053651,
00:37:36.271 "io_failed": 0,
00:37:36.271 "io_timeout": 0,
00:37:36.271 "avg_latency_us": 6499.029597322244,
00:37:36.271 "min_latency_us": 2225.4933333333333,
00:37:36.271 "max_latency_us": 22063.786666666667
00:37:36.271 }
00:37:36.271 ],
00:37:36.271 "core_count": 1
00:37:36.271 }
00:37:36.271 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:36.271 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:36.271 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:36.271 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:36.271 | .driver_specific
00:37:36.271 | .nvme_error
00:37:36.271 | .status_code
00:37:36.271 | .command_transient_transport_error'
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 ))
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3282829
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3282829 ']'
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3282829
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:36.532 17:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3282829
00:37:36.532 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:36.532 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:36.532 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3282829'
00:37:36.532 killing process with pid 3282829
00:37:36.532 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3282829
00:37:36.532 Received shutdown signal, test time was about 2.000000 seconds
00:37:36.532
00:37:36.532 Latency(us)
00:37:36.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:36.532 ===================================================================================================================
00:37:36.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:36.532 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3282829
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3283528
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3283528 /var/tmp/bperf.sock
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3283528 ']'
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
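The get_transient_errcount trace above is the pass/fail check for this case: digest.sh queries the bdevperf RPC socket for nvme0n1's I/O statistics and extracts how many completions carried the TRANSIENT TRANSPORT ERROR status produced by the injected CRC-32C data-digest errors (154 in this run). A minimal stand-alone sketch of that check, assuming the same SPDK checkout path and /var/tmp/bperf.sock socket as this job and using only the commands visible in the trace; the error counters are presumably exposed because bdev_nvme_set_options is called with --nvme-error-stat, as the trace for the next run shows below:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Ask the running bdevperf instance for per-bdev statistics over its RPC socket,
# then pull the transient-transport-error counter out of the NVMe error stats.
errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# digest.sh@71 requires at least one injected digest error to surface as a
# transient transport error; this run reported 154 of them.
(( errcount > 0 ))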
00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:36.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:36.793 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:36.793 [2024-10-01 17:37:35.177422] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:36.793 [2024-10-01 17:37:35.177480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283528 ] 00:37:36.793 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:36.793 Zero copy mechanism will not be used. 00:37:36.793 [2024-10-01 17:37:35.252542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.793 [2024-10-01 17:37:35.280872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.735 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:37.735 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:37.735 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:37.736 17:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:37.736 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:37.996 nvme0n1 00:37:37.996 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:37.996 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.996 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:37.996 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.996 17:37:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:37.996 17:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:37.996 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:37.996 Zero copy mechanism will not be used. 00:37:37.996 Running I/O for 2 seconds... 00:37:37.996 [2024-10-01 17:37:36.525196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:37.996 [2024-10-01 17:37:36.525227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.996 [2024-10-01 17:37:36.525236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:37.996 [2024-10-01 17:37:36.537355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:37.996 [2024-10-01 17:37:36.537377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:37.996 [2024-10-01 17:37:36.537384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.548226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.548246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.548254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.559655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.559674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.559682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.570509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.570528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.570535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.581938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.581957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.581964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.591926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.591944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.591951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.599432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.599450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.599457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.611610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.611628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.611635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.620241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.620259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.620266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.630523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.630541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.630548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.639794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.639812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.639818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.649617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.649635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.649641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.660265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.660283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.660290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.668884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.668902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.668909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.681584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.681603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.681613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.688230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.688250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.688256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.695983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.696007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.696014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.707098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.707117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.707123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.258 [2024-10-01 17:37:36.718588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.258 [2024-10-01 17:37:36.718608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.258 [2024-10-01 17:37:36.718614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.729107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.729125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.729132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.740233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.740252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.740258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.750359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.750378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.750385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.761196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.761215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.761222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.771908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.771933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.771940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.782455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.782474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.782481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.791573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.791592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 
[2024-10-01 17:37:36.791598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.259 [2024-10-01 17:37:36.802136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.259 [2024-10-01 17:37:36.802155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.259 [2024-10-01 17:37:36.802162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.813122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.813142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.813148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.824540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.824559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.824567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.835264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.835282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.835289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.844374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.844394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.844401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.855295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.855315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.855321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.865601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.865621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.865627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.874920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.874939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.874946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.883346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.883366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.520 [2024-10-01 17:37:36.883372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.520 [2024-10-01 17:37:36.893028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.520 [2024-10-01 17:37:36.893047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.893053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.903576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.903595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.912339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.912359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.912365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.922842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.922862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.922868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.931334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.931353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.931360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.941673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.941692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.941702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.949869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.949889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.949895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.959600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.959620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.959627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.968473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.968493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.968500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.981211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.981231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.981238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:36.991747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:36.991767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:36.991773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.002617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.002636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.002643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.012472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.012491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.012497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.022916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.022935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.022942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.033290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.033314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.033320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.044477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.044496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.044503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.052636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.052656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.052663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.521 [2024-10-01 17:37:37.061071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.521 [2024-10-01 17:37:37.061090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.521 [2024-10-01 17:37:37.061096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.071806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 
[2024-10-01 17:37:37.071826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.071832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.081376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.081395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.081402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.092840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.092860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.092867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.101602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.101621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.101628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.108848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.108867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.108874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.116967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.116986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.116998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.127694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.127713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.127719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.136761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.136780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.136787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.146026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.146046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.146052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.156771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.783 [2024-10-01 17:37:37.156790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.783 [2024-10-01 17:37:37.156796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.783 [2024-10-01 17:37:37.168015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.168034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.176111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.176130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.176136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.186289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.186309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.186315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.198623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.198642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.198652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.205788] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.205808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.205814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.216760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.216779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.216786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.224275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.224294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.224301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.234237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.234256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.240366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.240385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.240391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.250923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.250943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.250950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.259047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.259066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.259072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:37:38.784 [2024-10-01 17:37:37.270074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.270093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.270100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.279280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.279304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.279311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.290458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.290477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.290484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.298679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.298698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.298705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.309454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.309472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.309479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.318210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.318230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.318236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:38.784 [2024-10-01 17:37:37.327352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:38.784 [2024-10-01 17:37:37.327372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.784 [2024-10-01 17:37:37.327379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.047 [2024-10-01 17:37:37.336813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.047 [2024-10-01 17:37:37.336833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.047 [2024-10-01 17:37:37.336839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.047 [2024-10-01 17:37:37.345452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.047 [2024-10-01 17:37:37.345472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.047 [2024-10-01 17:37:37.345478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.047 [2024-10-01 17:37:37.355526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.047 [2024-10-01 17:37:37.355546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.047 [2024-10-01 17:37:37.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.047 [2024-10-01 17:37:37.364901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.047 [2024-10-01 17:37:37.364920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.047 [2024-10-01 17:37:37.364927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.047 [2024-10-01 17:37:37.373040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.373060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.373066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.381639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.381658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.381665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.389540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.389560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.389566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.395366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.395386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.395392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.402386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.402406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.402412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.411917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.411937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.411944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.418460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.418480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.418486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.424249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.424269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.424278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.434356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.434375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.434382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.442975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.442998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.443005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.448367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.448386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.448392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.455770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.455790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.455796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.465957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.465977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.465983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.473247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.473267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.473273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.484501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.484520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.484527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.492183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.492202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.492209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.500982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.501010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 
[2024-10-01 17:37:37.501016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 3214.00 IOPS, 401.75 MiB/s [2024-10-01 17:37:37.513869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.513889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.523174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.523200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.534917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.534937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.534944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.541554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.541573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.541580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.547809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.547828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.547835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.553374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.553394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.553400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.561115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.561134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.561141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.571923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.571943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.571953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.048 [2024-10-01 17:37:37.582324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.048 [2024-10-01 17:37:37.582344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.048 [2024-10-01 17:37:37.582351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.593820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.593840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.593847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.602324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.602343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.602350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.611102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.611121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.611128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.617423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.617442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.617449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.625286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.625305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.625312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.636151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.636170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.636177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.647601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.647621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.647628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.660182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.660206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.660212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.669436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.669456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.669463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.677770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.677789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.677796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.685816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.685835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.685842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.694839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.694857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.694864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.703266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.703285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.703292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.711884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.711904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.711910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.718109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.718128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.718135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.310 [2024-10-01 17:37:37.728492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.310 [2024-10-01 17:37:37.728511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.310 [2024-10-01 17:37:37.728517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.735128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.735147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.735153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.744854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.744879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.752762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 
00:37:39.311 [2024-10-01 17:37:37.752781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.752787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.758294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.758313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.758319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.768625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.768644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.768650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.776419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.776438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.776445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.785595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.785614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.785621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.790822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.790840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.790847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.795076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.795094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.795104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.798907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.798926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.798932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.805450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.805469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.805475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.812806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.812825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.812832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.822739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.822758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.822765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.831656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.831675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.831681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.837290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.837309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.837316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.842946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.842965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.842972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.311 [2024-10-01 17:37:37.853378] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.311 [2024-10-01 17:37:37.853397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.311 [2024-10-01 17:37:37.853404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.572 [2024-10-01 17:37:37.858920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.572 [2024-10-01 17:37:37.858944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.572 [2024-10-01 17:37:37.858950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.572 [2024-10-01 17:37:37.866560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.572 [2024-10-01 17:37:37.866579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.572 [2024-10-01 17:37:37.866586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.572 [2024-10-01 17:37:37.874097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.572 [2024-10-01 17:37:37.874118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.572 [2024-10-01 17:37:37.874124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.572 [2024-10-01 17:37:37.882898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.572 [2024-10-01 17:37:37.882917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.572 [2024-10-01 17:37:37.882924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.572 [2024-10-01 17:37:37.891982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.572 [2024-10-01 17:37:37.892007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.572 [2024-10-01 17:37:37.892014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.903344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.903364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.903371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:37:39.573 [2024-10-01 17:37:37.911838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.911857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.911864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.917318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.917337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.917344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.927645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.927664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.927671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.933543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.933563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.933570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.941025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.941043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.941050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.951522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.951541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.951548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.960835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.960854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.960861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.973094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.973113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.973119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.985033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.985052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.985059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:37.995513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:37.995532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:37.995539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.003539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.003558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.003564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.013291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.013310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.013320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.021267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.021286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.021292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.027701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.027719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.027725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.038868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.038887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.038894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.049271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.049290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.049297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.058103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.058121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.058128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.064340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.064359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.064365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.071432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.071451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.071458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.080730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.080751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.080757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.091782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.091804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.091811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.103037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.103057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.103063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.573 [2024-10-01 17:37:38.112477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.573 [2024-10-01 17:37:38.112496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.573 [2024-10-01 17:37:38.112502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.121014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.121034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.121040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.129481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.129500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.129507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.138182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.138201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.138207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.148490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.148508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.148515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.156089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.156109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 
[2024-10-01 17:37:38.156116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.165510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.165530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.165536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.172622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.172640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.172647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.182654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.182673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.182680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.190629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.190648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.190654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.196178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.196197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.196203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.204666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.204685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.204692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.214099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.214117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.214123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.220316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.220335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.220341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.225540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.225559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.225566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.232338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.232358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.232367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.237331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.237349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.242567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.242586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.242592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.247848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.247867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.253039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.253058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.253064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.260868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.260887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.260894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.266145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.266164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.266170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.276437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.276456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.276463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.287907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.287927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.287933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.298836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.835 [2024-10-01 17:37:38.298855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.835 [2024-10-01 17:37:38.298861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.835 [2024-10-01 17:37:38.304176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.304195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.304201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.312904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.312923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.312930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.321063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.321082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.321089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.332762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.332782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.332788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.345421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.345441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.345447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.357184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.357209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.368493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.368512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.368519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:39.836 [2024-10-01 17:37:38.378685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:39.836 [2024-10-01 17:37:38.378704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.836 [2024-10-01 17:37:38.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.388640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 
[2024-10-01 17:37:38.388659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.388666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.395794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.395813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.395819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.403065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.403084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.403090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.411511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.411530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.411536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.420658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.420678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.420684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.430838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.430857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.430863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.439842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.439861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.439867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.451043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.451062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.451068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.462970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.462992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.463003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.473148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.473167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.473173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.483105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.483124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.097 [2024-10-01 17:37:38.483130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:40.097 [2024-10-01 17:37:38.493998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.097 [2024-10-01 17:37:38.494017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.098 [2024-10-01 17:37:38.494023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:40.098 [2024-10-01 17:37:38.506242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.098 [2024-10-01 17:37:38.506262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.098 [2024-10-01 17:37:38.506268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:40.098 3408.50 IOPS, 426.06 MiB/s [2024-10-01 17:37:38.516347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe45730) 00:37:40.098 [2024-10-01 17:37:38.516367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.098 [2024-10-01 17:37:38.516373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.098 00:37:40.098 Latency(us) 00:37:40.098 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:40.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:40.098 nvme0n1 : 2.01 3406.66 425.83 0.00 0.00 4692.41 706.56 13544.11
00:37:40.098 ===================================================================================================================
00:37:40.098 Total : 3406.66 425.83 0.00 0.00 4692.41 706.56 13544.11
00:37:40.098 {
00:37:40.098 "results": [
00:37:40.098 {
00:37:40.098 "job": "nvme0n1",
00:37:40.098 "core_mask": "0x2",
00:37:40.098 "workload": "randread",
00:37:40.098 "status": "finished",
00:37:40.098 "queue_depth": 16,
00:37:40.098 "io_size": 131072,
00:37:40.098 "runtime": 2.005775,
00:37:40.098 "iops": 3406.66325983722,
00:37:40.098 "mibps": 425.8329074796525,
00:37:40.098 "io_failed": 0,
00:37:40.098 "io_timeout": 0,
00:37:40.098 "avg_latency_us": 4692.411438606761,
00:37:40.098 "min_latency_us": 706.56,
00:37:40.098 "max_latency_us": 13544.106666666667
00:37:40.098 }
00:37:40.098 ],
00:37:40.098 "core_count": 1
00:37:40.098 }
00:37:40.098 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:40.098 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:40.098 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:40.098 | .driver_specific
00:37:40.098 | .nvme_error
00:37:40.098 | .status_code
00:37:40.098 | .command_transient_transport_error'
00:37:40.098 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3283528
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3283528 ']'
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3283528
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3283528
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3283528'
00:37:40.359 killing process with pid 3283528
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3283528
00:37:40.359 Received shutdown signal, test time was about 2.000000 seconds
00:37:40.359
00:37:40.359 Latency(us)
00:37:40.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:40.359
===================================================================================================================
00:37:40.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3283528
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3284264
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3284264 /var/tmp/bperf.sock
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3284264 ']'
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:40.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:40.359 17:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
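The trace above is host/digest.sh setting up its second error case: run_bperf_err randwrite 4096 128 restarts bdevperf as an idle RPC-driven process (-z) on /var/tmp/bperf.sock and waits for it to listen before configuring it. A minimal stand-alone sketch of that launch step, assuming the same workspace path and socket that appear in this log; the rpc_get_methods polling loop is only a stand-in for the waitforlisten helper in autotest_common.sh:

    # start bdevperf idle (-z) so it can be configured over the RPC socket;
    # core mask 0x2 (one core), randwrite, 4 KiB I/O, queue depth 128, 2 second run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # poll until the RPC socket answers (stand-in for waitforlisten)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done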
00:37:40.618 [2024-10-01 17:37:38.943575] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:37:40.618 [2024-10-01 17:37:38.943633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284264 ]
00:37:40.618 [2024-10-01 17:37:39.020327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:40.618 [2024-10-01 17:37:39.048579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:37:41.186 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:41.186 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:41.186 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:41.186 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:41.445 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:41.445 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.445 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:41.446 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.446 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:41.446 17:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:41.705 nvme0n1
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:41.705 17:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:41.965 Running I/O for 2 seconds...
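Before perform_tests is issued, digest.sh@61-67 has configured this run entirely over the bperf RPC socket: per-status-code NVMe error counters are enabled, the controller is attached over TCP with data digest (--ddgst), and the accel error injector is told to corrupt crc32c results, which is what the "Data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions below correspond to. A sketch of that RPC sequence, assuming the rpc.py path and socket seen in this log and using only commands that appear in the trace (the -i 256 argument is copied verbatim; the jq readback mirrors get_transient_errcount from the earlier randread pass):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # count NVMe errors per status code so bdev_get_iostat can report them; -1 retry count as in the trace
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the NVMe-oF/TCP controller with data digest enabled
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # make the accel framework corrupt crc32c results (arguments taken verbatim from the trace above)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    # ... run I/O via bdevperf.py perform_tests, then read back the transient transport error count:
    $RPC bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

As the earlier (( 220 > 0 )) check in digest.sh@71 shows, the test's pass criterion is simply that this counter is non-zero after the run.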
00:37:41.965 [2024-10-01 17:37:40.292255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eb760 00:37:41.965 [2024-10-01 17:37:40.293882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.965 [2024-10-01 17:37:40.293910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:41.965 [2024-10-01 17:37:40.301973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2d80 00:37:41.965 [2024-10-01 17:37:40.302934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.965 [2024-10-01 17:37:40.302952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:41.965 [2024-10-01 17:37:40.315107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5220 00:37:41.965 [2024-10-01 17:37:40.316190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.965 [2024-10-01 17:37:40.316208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.965 [2024-10-01 17:37:40.327139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e4140 00:37:41.965 [2024-10-01 17:37:40.328240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.965 [2024-10-01 17:37:40.328257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.965 [2024-10-01 17:37:40.339117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e3060 00:37:41.966 [2024-10-01 17:37:40.340195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.340211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.351067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb048 00:37:41.966 [2024-10-01 17:37:40.352157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.352174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.364587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fdeb0 00:37:41.966 [2024-10-01 17:37:40.366328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 
cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.375015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e7818 00:37:41.966 [2024-10-01 17:37:40.376119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.376135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.387025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e88f8 00:37:41.966 [2024-10-01 17:37:40.388128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.388144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.399022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e99d8 00:37:41.966 [2024-10-01 17:37:40.400115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.400131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.412602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ec840 00:37:41.966 [2024-10-01 17:37:40.414340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.414355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.424458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5a90 00:37:41.966 [2024-10-01 17:37:40.426165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.426181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.434845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:41.966 [2024-10-01 17:37:40.435926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.435943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.445988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ea248 00:37:41.966 [2024-10-01 17:37:40.447057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.447073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.460290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e9168 00:37:41.966 [2024-10-01 17:37:40.462016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.462032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.471034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f6020 00:37:41.966 [2024-10-01 17:37:40.472252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.472269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.484692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:41.966 [2024-10-01 17:37:40.486580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.486596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.495032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f8e88 00:37:41.966 [2024-10-01 17:37:40.496271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.496287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:41.966 [2024-10-01 17:37:40.506899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee5c8 00:37:41.966 [2024-10-01 17:37:40.508116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:41.966 [2024-10-01 17:37:40.508132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.518843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f6020 00:37:42.227 [2024-10-01 17:37:40.519937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.519957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.532324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e0ea0 00:37:42.227 [2024-10-01 17:37:40.534188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.534204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.542784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e1f80 00:37:42.227 [2024-10-01 17:37:40.544021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.544039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.553952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f7538 00:37:42.227 [2024-10-01 17:37:40.555191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.555207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.568132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f7538 00:37:42.227 [2024-10-01 17:37:40.569989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.570008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.579081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eb760 00:37:42.227 [2024-10-01 17:37:40.580469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.580486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.591238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f7970 00:37:42.227 [2024-10-01 17:37:40.592622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.592639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.603198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e1f80 00:37:42.227 [2024-10-01 17:37:40.604583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.604599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.614399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fa7d8 00:37:42.227 [2024-10-01 17:37:40.615747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.615763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.628689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fe720 00:37:42.227 [2024-10-01 17:37:40.630723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.630740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.639079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ebfd0 00:37:42.227 [2024-10-01 17:37:40.640458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.640475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.651013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ebfd0 00:37:42.227 [2024-10-01 17:37:40.652357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.652373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.664428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ebfd0 00:37:42.227 [2024-10-01 17:37:40.666452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.666469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.674782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb048 00:37:42.227 [2024-10-01 17:37:40.676120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.676136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.686652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e0a68 00:37:42.227 [2024-10-01 17:37:40.688017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.688033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.698576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f8a50 00:37:42.227 [2024-10-01 17:37:40.699937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 
17:37:40.699954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.710535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f9b30 00:37:42.227 [2024-10-01 17:37:40.711907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.711924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.724007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eb328 00:37:42.227 [2024-10-01 17:37:40.726015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.227 [2024-10-01 17:37:40.726031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:42.227 [2024-10-01 17:37:40.734366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fdeb0 00:37:42.227 [2024-10-01 17:37:40.735731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.228 [2024-10-01 17:37:40.735747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:42.228 [2024-10-01 17:37:40.746299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fdeb0 00:37:42.228 [2024-10-01 17:37:40.747666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.228 [2024-10-01 17:37:40.747683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:42.228 [2024-10-01 17:37:40.758177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eaab8 00:37:42.228 [2024-10-01 17:37:40.759545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.228 [2024-10-01 17:37:40.759562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.228 [2024-10-01 17:37:40.769365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee190 00:37:42.228 [2024-10-01 17:37:40.770710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.228 [2024-10-01 17:37:40.770726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.782060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee190 00:37:42.488 [2024-10-01 17:37:40.783376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:42.488 [2024-10-01 17:37:40.783393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.793985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee190 00:37:42.488 [2024-10-01 17:37:40.795351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.807428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee190 00:37:42.488 [2024-10-01 17:37:40.809420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.809438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.817860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e0630 00:37:42.488 [2024-10-01 17:37:40.819234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.819250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.829790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fdeb0 00:37:42.488 [2024-10-01 17:37:40.831114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.831133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.841763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ea680 00:37:42.488 [2024-10-01 17:37:40.843133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.853718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e95a0 00:37:42.488 [2024-10-01 17:37:40.855063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.488 [2024-10-01 17:37:40.855079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.488 [2024-10-01 17:37:40.865669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:42.488 [2024-10-01 17:37:40.867004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.867020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.877613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e1f80 00:37:42.489 [2024-10-01 17:37:40.878928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.878945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.891079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fdeb0 00:37:42.489 [2024-10-01 17:37:40.893060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.893076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.901487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e8d30 00:37:42.489 [2024-10-01 17:37:40.902843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.902860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.913424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ee190 00:37:42.489 [2024-10-01 17:37:40.914771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.914788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.925340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2948 00:37:42.489 [2024-10-01 17:37:40.926695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.926712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.937289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2948 00:37:42.489 [2024-10-01 17:37:40.938633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.938652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.949177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2948 00:37:42.489 [2024-10-01 17:37:40.950518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17992 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.950535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.960282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fac10 00:37:42.489 [2024-10-01 17:37:40.961610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.961627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.974500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fac10 00:37:42.489 [2024-10-01 17:37:40.976476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.976493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.984902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e7818 00:37:42.489 [2024-10-01 17:37:40.986237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.986254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:40.996148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:42.489 [2024-10-01 17:37:40.997464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:40.997481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:41.008820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:42.489 [2024-10-01 17:37:41.010142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:41.010159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:41.020722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:42.489 [2024-10-01 17:37:41.022041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:41.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.489 [2024-10-01 17:37:41.032679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:42.489 [2024-10-01 17:37:41.034000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 
nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.489 [2024-10-01 17:37:41.034017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.044588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198edd58 00:37:42.750 [2024-10-01 17:37:41.045913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.045929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.056542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f20d8 00:37:42.750 [2024-10-01 17:37:41.057873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.057890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.070064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fac10 00:37:42.750 [2024-10-01 17:37:41.072014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.072031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.080397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.081706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.081722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.092316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.093628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.093645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.104252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.105565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.105582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.116162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.117473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.117489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.128107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.129425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.129442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.139989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fa3a0 00:37:42.750 [2024-10-01 17:37:41.141306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.141324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.151150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2948 00:37:42.750 [2024-10-01 17:37:41.152444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.163832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f2948 00:37:42.750 [2024-10-01 17:37:41.165138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.165155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.175678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:42.750 [2024-10-01 17:37:41.176986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.177006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.187619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:42.750 [2024-10-01 17:37:41.188913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.188930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.199544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:42.750 [2024-10-01 17:37:41.200836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.200852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.211426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e1f80 00:37:42.750 [2024-10-01 17:37:41.212749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.212767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.222617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5658 00:37:42.750 [2024-10-01 17:37:41.223893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.223910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.235341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb8b8 00:37:42.750 [2024-10-01 17:37:41.236615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.236631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.246492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed0b0 00:37:42.750 [2024-10-01 17:37:41.247772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.247790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.259181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed0b0 00:37:42.750 [2024-10-01 17:37:41.260434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.260452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.271049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fc128 00:37:42.750 [2024-10-01 17:37:41.272306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.272322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:42.750 21206.00 IOPS, 82.84 MiB/s [2024-10-01 17:37:41.284525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e7c50 
00:37:42.750 [2024-10-01 17:37:41.286446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.286462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:42.750 [2024-10-01 17:37:41.294934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5ec8 00:37:42.750 [2024-10-01 17:37:41.296128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:42.750 [2024-10-01 17:37:41.296145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.010 [2024-10-01 17:37:41.306835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198dfdc0 00:37:43.010 [2024-10-01 17:37:41.308066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.308082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.318811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ed920 00:37:43.011 [2024-10-01 17:37:41.320068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.320084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.330755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f5be8 00:37:43.011 [2024-10-01 17:37:41.332036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.332053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.342654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e73e0 00:37:43.011 [2024-10-01 17:37:41.343920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.343936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.356183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e84c0 00:37:43.011 [2024-10-01 17:37:41.358103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.358119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.365791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) 
with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.367053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.367069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.378467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.379742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.379759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.390400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.391670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.391686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.402299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.403574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.403590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.414205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.415555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.415571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.426215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.427477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.427493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.438143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.011 [2024-10-01 17:37:41.439379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.439396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.450030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2445550) with pdu=0x2000198e4de8 00:37:43.011 [2024-10-01 17:37:41.451284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.451303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.461963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e7c50 00:37:43.011 [2024-10-01 17:37:41.463251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.463267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.473098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e3d08 00:37:43.011 [2024-10-01 17:37:41.474356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.474371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.487295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e3d08 00:37:43.011 [2024-10-01 17:37:41.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.489175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.497675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0350 00:37:43.011 [2024-10-01 17:37:41.498936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.498952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.511136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e4578 00:37:43.011 [2024-10-01 17:37:41.513042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.513058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.521919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f7100 00:37:43.011 [2024-10-01 17:37:41.523308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.523324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.535567] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5a90 00:37:43.011 [2024-10-01 17:37:41.537632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.537648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:43.011 [2024-10-01 17:37:41.545913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f5378 00:37:43.011 [2024-10-01 17:37:41.547246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.011 [2024-10-01 17:37:41.547262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.557821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e5220 00:37:43.270 [2024-10-01 17:37:41.559231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.559250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.569770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fa3a0 00:37:43.270 [2024-10-01 17:37:41.571179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.571195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.581084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e3d08 00:37:43.270 [2024-10-01 17:37:41.582485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.582501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.593801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f1868 00:37:43.270 [2024-10-01 17:37:41.595213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.595230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.605704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.607124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.607140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 
[2024-10-01 17:37:41.617628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f0788 00:37:43.270 [2024-10-01 17:37:41.619027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.619043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.629563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f5378 00:37:43.270 [2024-10-01 17:37:41.630979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.631000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.643086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e6b70 00:37:43.270 [2024-10-01 17:37:41.645144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.645160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.652728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f5be8 00:37:43.270 [2024-10-01 17:37:41.654107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.654124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.666992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e3d08 00:37:43.270 [2024-10-01 17:37:41.669056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.669072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.677363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.678764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.678780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.689277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.690682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.690699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.701185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.702604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.702621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.713111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.714507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.714523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.725003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.726409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.726425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.736874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.738280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.738296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.748803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.270 [2024-10-01 17:37:41.750179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.750195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.760706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e6738 00:37:43.270 [2024-10-01 17:37:41.762059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.762076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.772638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f3e60 00:37:43.270 [2024-10-01 17:37:41.774020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.774037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.786129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f1868 00:37:43.270 [2024-10-01 17:37:41.788148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.788164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.795711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f9f68 00:37:43.270 [2024-10-01 17:37:41.797101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.797117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:43.270 [2024-10-01 17:37:41.807560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f96f8 00:37:43.270 [2024-10-01 17:37:41.808927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.270 [2024-10-01 17:37:41.808943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:43.529 [2024-10-01 17:37:41.820237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f96f8 00:37:43.529 [2024-10-01 17:37:41.821502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.529 [2024-10-01 17:37:41.821518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.529 [2024-10-01 17:37:41.833660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fe2e8 00:37:43.529 [2024-10-01 17:37:41.835677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.529 [2024-10-01 17:37:41.835693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.529 [2024-10-01 17:37:41.843283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198dece0 00:37:43.529 [2024-10-01 17:37:41.844643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.844658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.856035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.530 [2024-10-01 17:37:41.857383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.857399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.867973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f96f8 00:37:43.530 [2024-10-01 17:37:41.869351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.869370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.879929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e6fa8 00:37:43.530 [2024-10-01 17:37:41.881287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.881303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.891862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f3e60 00:37:43.530 [2024-10-01 17:37:41.893238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.903773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198efae0 00:37:43.530 [2024-10-01 17:37:41.905137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.905154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.915721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fc998 00:37:43.530 [2024-10-01 17:37:41.917050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.917066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.927622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e4140 00:37:43.530 [2024-10-01 17:37:41.928976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.928992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.938532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f96f8 00:37:43.530 [2024-10-01 17:37:41.939437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.939453] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.951300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e9e10 00:37:43.530 [2024-10-01 17:37:41.952816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.952832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.961635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:41.962526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.962543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.973568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:41.974433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.974449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.985449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:41.986325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.986341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:41.997347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:41.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:41.998239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.009281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:42.010165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.010181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.021210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:42.022077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.022094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.033123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:42.033997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.045020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:42.045892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.056908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eee38 00:37:43.530 [2024-10-01 17:37:42.057687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.057704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.530 [2024-10-01 17:37:42.068845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e2c28 00:37:43.530 [2024-10-01 17:37:42.069707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.530 [2024-10-01 17:37:42.069723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.082529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f92c0 00:37:43.790 [2024-10-01 17:37:42.084043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.084059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.092488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ebfd0 00:37:43.790 [2024-10-01 17:37:42.093506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.093522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.105190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f57b0 00:37:43.790 [2024-10-01 17:37:42.106219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 
17:37:42.106236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.116316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198df988 00:37:43.790 [2024-10-01 17:37:42.117294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.117309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.129013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198eb760 00:37:43.790 [2024-10-01 17:37:42.130033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.130049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.140992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198de470 00:37:43.790 [2024-10-01 17:37:42.142014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.142031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.152939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f20d8 00:37:43.790 [2024-10-01 17:37:42.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.153967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.164879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fa7d8 00:37:43.790 [2024-10-01 17:37:42.165921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.165937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.176800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ea248 00:37:43.790 [2024-10-01 17:37:42.177823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.177842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.188730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e6738 00:37:43.790 [2024-10-01 17:37:42.189774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:43.790 [2024-10-01 17:37:42.189791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.200681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f20d8 00:37:43.790 [2024-10-01 17:37:42.201701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.201717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.212635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ff3c8 00:37:43.790 [2024-10-01 17:37:42.213618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.213634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.224570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fa7d8 00:37:43.790 [2024-10-01 17:37:42.225619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.225635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.236511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198fb480 00:37:43.790 [2024-10-01 17:37:42.237487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.790 [2024-10-01 17:37:42.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.790 [2024-10-01 17:37:42.248421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198ea248 00:37:43.790 [2024-10-01 17:37:42.249437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.791 [2024-10-01 17:37:42.249453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:43.791 [2024-10-01 17:37:42.260360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198e6738 00:37:43.791 [2024-10-01 17:37:42.261415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:43.791 [2024-10-01 17:37:42.261432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:43.791 [2024-10-01 17:37:42.271556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445550) with pdu=0x2000198f6020 00:37:43.791 [2024-10-01 17:37:42.272579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23172 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:37:43.791 [2024-10-01 17:37:42.272595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:37:43.791 21309.50 IOPS, 83.24 MiB/s
00:37:43.791 Latency(us)
00:37:43.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:43.791 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:43.791 nvme0n1 : 2.01 21341.64 83.37 0.00 0.00 5988.87 2252.80 14199.47
00:37:43.791 ===================================================================================================================
00:37:43.791 Total : 21341.64 83.37 0.00 0.00 5988.87 2252.80 14199.47
00:37:43.791 {
00:37:43.791 "results": [
00:37:43.791 {
00:37:43.791 "job": "nvme0n1",
00:37:43.791 "core_mask": "0x2",
00:37:43.791 "workload": "randwrite",
00:37:43.791 "status": "finished",
00:37:43.791 "queue_depth": 128,
00:37:43.791 "io_size": 4096,
00:37:43.791 "runtime": 2.005985,
00:37:43.791 "iops": 21341.635156793294,
00:37:43.791 "mibps": 83.3657623312238,
00:37:43.791 "io_failed": 0,
00:37:43.791 "io_timeout": 0,
00:37:43.791 "avg_latency_us": 5988.869896054752,
00:37:43.791 "min_latency_us": 2252.8,
00:37:43.791 "max_latency_us": 14199.466666666667
00:37:43.791 }
00:37:43.791 ],
00:37:43.791 "core_count": 1
00:37:43.791 }
00:37:43.791 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:43.791 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:43.791 | .driver_specific
00:37:43.791 | .nvme_error
00:37:43.791 | .status_code
00:37:43.791 | .command_transient_transport_error'
00:37:43.791 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:43.791 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3284264
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3284264 ']'
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3284264
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3284264
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3284264'
killing process with pid 3284264
17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3284264
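The "(( 167 > 0 ))" check above is the pass criterion for this run: the harness counts how many completions the host recorded with COMMAND TRANSIENT TRANSPORT ERROR status and requires at least one. A minimal sketch of the same query, assuming the bdevperf RPC socket is still /var/tmp/bperf.sock and the attached bdev is named nvme0n1 as traced above (the counters exist because the harness enables --nvme-error-stat when setting bdev_nvme options):

  # Pull per-bdev I/O statistics from the bdevperf app and extract the count of
  # completions that ended with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  # In this run the count was 167; any value > 0 means the corrupted digests were
  # detected and reported instead of being silently accepted.
  (( errcount > 0 ))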
00:37:44.051 Received shutdown signal, test time was about 2.000000 seconds 00:37:44.051 00:37:44.051 Latency(us) 00:37:44.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.051 =================================================================================================================== 00:37:44.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.051 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3284264 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3285012 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3285012 /var/tmp/bperf.sock 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3285012 ']' 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:44.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:44.311 17:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:44.311 [2024-10-01 17:37:42.705938] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:37:44.311 [2024-10-01 17:37:42.706007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285012 ] 00:37:44.311 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:44.311 Zero copy mechanism will not be used. 
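Here run_bperf_err starts the second error pass: random writes with a 128 KiB I/O size at queue depth 16 for 2 seconds against a fresh bdevperf instance. A sketch of the equivalent manual launch, using only the arguments visible in the trace above (the workspace path and the bperf.sock name are specific to this job):

  # Run bdevperf on core 1 (mask 0x2) with its JSON-RPC server on a private socket.
  # -z makes it start idle so the NVMe-oF bdev can be attached and error injection
  # armed before perform_tests is issued; -o 131072 -q 16 -t 2 match the trace.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # 128 KiB I/Os are above the 65536-byte zero-copy threshold, which is why the
  # log notes that the zero copy mechanism will not be used.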
00:37:44.311 [2024-10-01 17:37:42.780139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.311 [2024-10-01 17:37:42.808326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.251 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:45.252 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.252 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:45.252 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:45.510 nvme0n1 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:45.510 17:37:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:45.510 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:45.510 Zero copy mechanism will not be used. 00:37:45.510 Running I/O for 2 seconds... 
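Between the reactor start and "Running I/O for 2 seconds..." the harness wires the error case together: per-command NVMe error counters are enabled, the controller is attached with data digest negotiated, and CRC32C corruption is armed in the accel layer, so digest verification fails and the writes complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as the following log records show. A condensed sketch of that RPC sequence using the commands traced above; it assumes rpc_cmd in this harness resolves to the main target application's default RPC socket, while bperf_rpc targets /var/tmp/bperf.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Track per-status NVMe error counts and retry indefinitely at the bdev layer.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled; this
  # creates the nvme0n1 bdev seen in the trace.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the accel error injector to corrupt crc32c results (the trace shows a
  # '-t disable' reset just before the controller is attached).
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the 2-second workload in the waiting bdevperf process.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests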
00:37:45.770 [2024-10-01 17:37:44.062267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.062624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.062652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.071109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.071457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.071479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.079935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.080251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.080271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.087818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.088159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.088178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.097139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.097472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.103844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.104183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.104202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.110072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.110407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.110425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.118928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.119271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.119294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.127031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.127395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.127413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.135152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.135499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.135517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.143719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.144046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.152925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.153263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.153282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.162683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.163030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.163048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.171559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.770 [2024-10-01 17:37:44.171902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.770 [2024-10-01 17:37:44.171920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.770 [2024-10-01 17:37:44.179647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.179997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.180016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.189374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.189714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.199011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.199351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.199369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.207392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.207718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.207736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.214639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.214970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.214988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.222734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.223079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.223096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.230506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.230852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.230869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.238345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.238672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.238689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.247408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.247712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.247729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.254270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.254473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.254490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.261610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.261953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.261971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.269249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.269610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.269628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.278100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.278404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.278422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.286268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.286461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 
[2024-10-01 17:37:44.286477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.294814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.295144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.295162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.305839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.306145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.306163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:45.771 [2024-10-01 17:37:44.314419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:45.771 [2024-10-01 17:37:44.314733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:45.771 [2024-10-01 17:37:44.314752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.322084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.322287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.322304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.330341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.330543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.330560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.338280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.338531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.338554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.345849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.346262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.346280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.353630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.353971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.353989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.362172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.362514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.362532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.368250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.368452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.368469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.375978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.376429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.376447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.385899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.386241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.386259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.395930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.396234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.396252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.405632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.405942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.405959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.413769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.413829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.413844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.423834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.424090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.424107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.433686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.433887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.433903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.444042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.444303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.444320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.456037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.456398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.456416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.467497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.467759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.032 [2024-10-01 17:37:44.467775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.032 [2024-10-01 17:37:44.479519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.032 [2024-10-01 17:37:44.479933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.479951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.491263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.491605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.491622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.502308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.502509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.502526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.511374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.511583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.511600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.521591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.521907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.521925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.532859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.533199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.533217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.543605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.543973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.543991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.551628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 
[2024-10-01 17:37:44.551791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.551807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.557161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.557362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.557379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.562388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.562589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.562605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.567205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.567415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.567432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.571647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.571845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.571866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.033 [2024-10-01 17:37:44.575941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.033 [2024-10-01 17:37:44.576146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.033 [2024-10-01 17:37:44.576163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.580328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.580529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.580546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.584455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.584654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.584671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.588441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.588648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.588665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.597794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.598000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.598017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.607272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.607574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.607592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.618856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.619233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.619251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.629744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.630107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.630124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.638824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.639162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.639180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.645027] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.645395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.645413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.653561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.653901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.653919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.659373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.659573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.659590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.666877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.667208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.667227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.673446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.293 [2024-10-01 17:37:44.673649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.293 [2024-10-01 17:37:44.673666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.293 [2024-10-01 17:37:44.680891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.681096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.681113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.689641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.689915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.689932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:46.294 [2024-10-01 17:37:44.697971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.698324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.698345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.703049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.703252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.703268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.708580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.708781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.708798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.717708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.718036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.718053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.725137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.725456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.732450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.732649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.732665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.738850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.739056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.739073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.746298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.746726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.746744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.754189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.754483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.754501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.761519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.761846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.761863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.767946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.768221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.768239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.775530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.775840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.775858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.782399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.782703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.782720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.789905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.790221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.790238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.798506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.798805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.798822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.805594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.805935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.805953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.814840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.815182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.815200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.823671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.824013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.824031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.830115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.830418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.830435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.294 [2024-10-01 17:37:44.836454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.294 [2024-10-01 17:37:44.836767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.294 [2024-10-01 17:37:44.836785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.845597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.845921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.845938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.855521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.855856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.855873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.867038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.867356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.867373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.878489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.878844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.878862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.889838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.890146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.890163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.901536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.901838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.901856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.911254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.911594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.911615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.917224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.917425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 
[2024-10-01 17:37:44.917442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.924269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.924561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.924578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.930204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.930499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.930516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.937720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.938068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.938085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.945863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.946079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.946096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.956273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.956611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.967318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.967620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.967638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.977663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.978045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.978062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.554 [2024-10-01 17:37:44.988804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.554 [2024-10-01 17:37:44.989142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.554 [2024-10-01 17:37:44.989160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:44.999042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:44.999336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:44.999354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.010249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.010555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.010573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.020891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.021188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.021208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.029582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.029849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.029866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.034630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.034830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.034847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.043763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.044077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.044095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.051750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.053210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.053228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.555 3689.00 IOPS, 461.12 MiB/s [2024-10-01 17:37:45.058840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.059155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.059172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.066447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.066664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.066679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.074833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.075150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.075168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.085736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.085962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.085979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.555 [2024-10-01 17:37:45.096846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.555 [2024-10-01 17:37:45.097211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.555 [2024-10-01 17:37:45.097229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.108430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.108799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.108817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.120114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.120411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.120430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.131738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.131947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.131963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.143185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.143557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.143574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.155494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.155866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.155888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.166816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.167045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.815 [2024-10-01 17:37:45.167061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.815 [2024-10-01 17:37:45.178497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.815 [2024-10-01 17:37:45.178804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.178821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.189479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 
00:37:46.816 [2024-10-01 17:37:45.189800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.189818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.199683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.200013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.200032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.209158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.209461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.209478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.218346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.218688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.218706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.227720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.228047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.228065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.235277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.235482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.235498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.243731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.244071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.244088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.251910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.252214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.252232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.259465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.259725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.267516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.267816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.267834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.275840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.276136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.276154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.284710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.285016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.285033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.293838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.294141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.294165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.301206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.301509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.301526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.309914] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.310247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.310268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.319665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.319933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.319949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.330784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.330999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.331015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.340323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.340524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.340541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.347538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.347739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.347756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.355237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.355439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.355456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.816 [2024-10-01 17:37:45.361297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:46.816 [2024-10-01 17:37:45.361497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.816 [2024-10-01 17:37:45.361513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:47.077 [2024-10-01 17:37:45.370757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.371062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.371080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.380591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.380915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.380932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.389482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.389787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.389805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.398862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.399198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.399215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.405655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.405864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.405881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.411637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.411847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.411864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.417691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.417892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.417909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.422903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.423107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.423123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.428528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.428727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.428744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.436096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.436436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.436453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.445402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.445698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.445716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.452067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.452359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.452377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.457350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.457558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.457575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.465246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.465509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.465525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.473333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.473533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.473549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.479220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.479467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.479482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.488569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.488914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.077 [2024-10-01 17:37:45.488931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.077 [2024-10-01 17:37:45.497349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.077 [2024-10-01 17:37:45.497656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.497674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.505503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.505795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.505812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.514191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.514582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.514603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.522370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.522570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.522587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.527114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.527195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.527211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.534625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.534834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.534851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.541833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.542147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.542164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.551811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.552130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.552148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.562795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.563095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.563112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.573638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.573837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.573854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.581545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.581839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 
[2024-10-01 17:37:45.581856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.587254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.587459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.587476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.596829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.597125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.597142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.603151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.603352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.603369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.607785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.607985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.608008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.613236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.613531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.613548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.078 [2024-10-01 17:37:45.621754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.078 [2024-10-01 17:37:45.622061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.078 [2024-10-01 17:37:45.622078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.631227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.631559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.631577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.640176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.640520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.640538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.649350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.649727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.649744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.656700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.656902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.656919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.662985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.663293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.663312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.669380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.669591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.669607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.675138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.675341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.675358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.682373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.682572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.682589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.689504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.689706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.689722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.696601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.696802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.696819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.704782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.705087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.705105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.715051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.715352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.722684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.722884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.722901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.727237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.727437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.727454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.732226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.732527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.732544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.739209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.739414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.739430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.743509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.743712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.339 [2024-10-01 17:37:45.743728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.339 [2024-10-01 17:37:45.752021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.339 [2024-10-01 17:37:45.752334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.752352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.757145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.757345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.757361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.761321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.761522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.761539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.765271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.765475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.765492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.770368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 
[2024-10-01 17:37:45.770625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.770642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.776811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.777134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.777152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.786559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.786869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.786887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.792899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.793234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.793252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.800816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.801178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.801196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.806911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.807116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.807133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.811069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.811270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.811286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.815482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.815681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.815698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.822794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.823012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.823028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.830917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.831121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.831137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.837774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.837983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.838005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.849320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.849669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.849686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.858381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.858628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.858644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.866934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.867150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.867166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.871362] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.871560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.871577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.340 [2024-10-01 17:37:45.879088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.340 [2024-10-01 17:37:45.879406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.340 [2024-10-01 17:37:45.879425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.889325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.889682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.899953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.900303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.900321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.910517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.910818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.910836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.915306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.915506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.915523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.919705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.920009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.920027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:47.600 [2024-10-01 17:37:45.924097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.924296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.924313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.928199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.928398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.928414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.600 [2024-10-01 17:37:45.932309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.600 [2024-10-01 17:37:45.932507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.600 [2024-10-01 17:37:45.932524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.936458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.936659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.936677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.941167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.941378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.941394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.949612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.949813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.949829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.953915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.954121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.954138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.957979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.958186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.958203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.962027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.962226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.962243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.969341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.969706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.969723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.977265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.977467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.977483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.981686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.981886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.986144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.986345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.986365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:45.992408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:45.992611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:45.992627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.000083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.000397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.000415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.005109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.005309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.005327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.010779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.011107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.011125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.019253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.019463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.019479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.024641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.024840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.024857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.033825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.033891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.033905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.043026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.043227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.043243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.048549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.048753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.601 [2024-10-01 17:37:46.054144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2445890) with pdu=0x2000198fef90 00:37:47.601 [2024-10-01 17:37:46.055465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.601 [2024-10-01 17:37:46.055484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.601 3890.50 IOPS, 486.31 MiB/s 00:37:47.601 Latency(us) 00:37:47.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.601 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:47.601 nvme0n1 : 2.00 3889.41 486.18 0.00 0.00 4108.61 1884.16 12615.68 00:37:47.601 =================================================================================================================== 00:37:47.601 Total : 3889.41 486.18 0.00 0.00 4108.61 1884.16 12615.68 00:37:47.601 { 00:37:47.601 "results": [ 00:37:47.601 { 00:37:47.601 "job": "nvme0n1", 00:37:47.601 "core_mask": "0x2", 00:37:47.601 "workload": "randwrite", 00:37:47.601 "status": "finished", 00:37:47.601 "queue_depth": 16, 00:37:47.601 "io_size": 131072, 00:37:47.601 "runtime": 2.004674, 00:37:47.601 "iops": 3889.41044778353, 00:37:47.601 "mibps": 486.17630597294124, 00:37:47.601 "io_failed": 0, 00:37:47.601 "io_timeout": 0, 00:37:47.601 "avg_latency_us": 4108.607926125432, 00:37:47.601 "min_latency_us": 1884.16, 00:37:47.601 "max_latency_us": 12615.68 00:37:47.601 } 00:37:47.601 ], 00:37:47.601 "core_count": 1 00:37:47.601 } 00:37:47.601 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:47.601 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:47.601 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:47.601 | .driver_specific 00:37:47.601 | .nvme_error 00:37:47.601 | .status_code 00:37:47.601 | .command_transient_transport_error' 00:37:47.601 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 251 > 0 )) 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3285012 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3285012 ']' 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3285012 00:37:47.860 17:37:46 
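The run above ends with bdevperf reporting roughly 3889 IOPS for the 128 KiB random-write job at queue depth 16, after which the harness asks the bdevperf process how many completions carried the TRANSIENT TRANSPORT ERROR status that each injected data-digest error produces; the count of 251 satisfies the (( 251 > 0 )) check in the trace. The query runs over the bperf RPC socket shown above. A minimal sketch of that query, reusing the rpc.py path, socket, bdev name, and jq filter exactly as they appear in this run (the shell variable names are illustrative only, not part of host/digest.sh):

  # Count NVMe completions that finished with COMMAND TRANSIENT TRANSPORT ERROR,
  # mirroring what get_transient_errcount does in the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # rpc.py path from this run
  SOCK=/var/tmp/bperf.sock                                               # bdevperf RPC socket from this run
  BDEV=nvme0n1                                                           # bdev attached over NVMe/TCP

  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The digest-error test passes only if at least one injected error surfaced here.
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"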
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3285012 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3285012' 00:37:47.860 killing process with pid 3285012 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3285012 00:37:47.860 Received shutdown signal, test time was about 2.000000 seconds 00:37:47.860 00:37:47.860 Latency(us) 00:37:47.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.860 =================================================================================================================== 00:37:47.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.860 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3285012 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3282758 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3282758 ']' 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3282758 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3282758 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3282758' 00:37:48.119 killing process with pid 3282758 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3282758 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3282758 00:37:48.119 00:37:48.119 real 0m16.276s 00:37:48.119 user 0m32.178s 00:37:48.119 sys 0m3.507s 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:48.119 ************************************ 00:37:48.119 END TEST nvmf_digest_error 00:37:48.119 ************************************ 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest 
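Both helper processes are then shut down with the killprocess helper from autotest_common.sh, whose steps are visible in the trace: check that a pid was supplied, probe it with kill -0, resolve the command name with ps so a sudo wrapper is never signalled directly, then kill and wait. A simplified sketch reconstructed from those traced steps (not the exact SPDK helper; the sudo branch is reduced to a bail-out here):

  # Simplified reconstruction of the killprocess flow traced above.
  killprocess_sketch() {
      local pid=$1 process_name=
      [ -z "$pid" ] && return 1                  # '[' -z "$pid" ']'
      kill -0 "$pid" || return 1                 # process must still be running
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1     # the traced runs take the non-sudo branch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # works because the test shell started it
  }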
-- host/digest.sh@150 -- # nvmftestfini 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:48.119 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.378 rmmod nvme_tcp 00:37:48.378 rmmod nvme_fabrics 00:37:48.378 rmmod nvme_keyring 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3282758 ']' 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3282758 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3282758 ']' 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3282758 00:37:48.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3282758) - No such process 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3282758 is not found' 00:37:48.378 Process with pid 3282758 is not found 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:48.378 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.379 17:37:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.288 17:37:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.288 00:37:50.288 real 0m41.246s 00:37:50.288 user 1m5.801s 00:37:50.288 sys 0m12.185s 00:37:50.288 17:37:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:50.288 17:37:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:50.288 ************************************ 00:37:50.289 END TEST nvmf_digest 00:37:50.289 ************************************ 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- 
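nvmftestfini then tears the TCP transport back down: the nvme-tcp and nvme-fabrics kernel modules are unloaded (taking nvme_fabrics and nvme_keyring with them, per the rmmod lines), a second killprocess of pid 3282758 finds the target already gone, the SPDK_NVMF iptables rules are stripped, and the test interface address is flushed before the suite moves on to nvmf_bdevperf. The equivalent manual cleanup, sketched from the commands traced in this run (cvl_0_1 is this rig's test interface; substitute your own):

  # Manual teardown mirroring nvmftestfini/nvmf_tcp_fini as traced above.
  modprobe -v -r nvme-tcp                                # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF rules
  ip -4 addr flush cvl_0_1                               # interface name from this run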
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.548 ************************************ 00:37:50.548 START TEST nvmf_bdevperf 00:37:50.548 ************************************ 00:37:50.548 17:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:50.548 * Looking for test storage... 00:37:50.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:50.548 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.807 --rc genhtml_branch_coverage=1 00:37:50.807 --rc genhtml_function_coverage=1 00:37:50.807 --rc genhtml_legend=1 00:37:50.807 --rc geninfo_all_blocks=1 00:37:50.807 --rc geninfo_unexecuted_blocks=1 00:37:50.807 00:37:50.807 ' 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.807 --rc genhtml_branch_coverage=1 00:37:50.807 --rc genhtml_function_coverage=1 00:37:50.807 --rc genhtml_legend=1 00:37:50.807 --rc geninfo_all_blocks=1 00:37:50.807 --rc geninfo_unexecuted_blocks=1 00:37:50.807 00:37:50.807 ' 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.807 --rc genhtml_branch_coverage=1 00:37:50.807 --rc genhtml_function_coverage=1 00:37:50.807 --rc genhtml_legend=1 00:37:50.807 --rc geninfo_all_blocks=1 00:37:50.807 --rc geninfo_unexecuted_blocks=1 00:37:50.807 00:37:50.807 ' 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.807 --rc genhtml_branch_coverage=1 00:37:50.807 --rc genhtml_function_coverage=1 00:37:50.807 --rc genhtml_legend=1 00:37:50.807 --rc geninfo_all_blocks=1 00:37:50.807 --rc geninfo_unexecuted_blocks=1 00:37:50.807 00:37:50.807 ' 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:50.807 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:50.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.808 17:37:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:57.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:57.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:57.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:57.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:57.504 17:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:37:57.764 00:37:57.764 --- 10.0.0.2 ping statistics --- 00:37:57.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.764 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:37:57.764 00:37:57.764 --- 10.0.0.1 ping statistics --- 00:37:57.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.764 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:57.764 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3289909 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3289909 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3289909 ']' 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.765 17:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.024 [2024-10-01 17:37:56.335723] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
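The nvmf_tcp_init sequence traced above reduces to the following commands, reproduced from the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this run detected, not fixed values. The initiator-side port stays in the root namespace as 10.0.0.1 while the target-side port is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, and a ping in each direction verifies the path before the target is exercised.

# Condensed sketch of the nvmf_tcp_init steps traced above (run as root);
# interface names and addresses are the ones this particular run detected.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2                                           # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace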
00:37:58.024 [2024-10-01 17:37:56.335775] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.024 [2024-10-01 17:37:56.419024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:58.024 [2024-10-01 17:37:56.451375] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.024 [2024-10-01 17:37:56.451410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.024 [2024-10-01 17:37:56.451418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.024 [2024-10-01 17:37:56.451425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.024 [2024-10-01 17:37:56.451431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.024 [2024-10-01 17:37:56.451534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.024 [2024-10-01 17:37:56.451691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.024 [2024-10-01 17:37:56.451692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:58.594 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:58.594 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:58.594 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:58.594 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:58.594 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 [2024-10-01 17:37:57.169821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 Malloc0 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
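The rpc_cmd calls traced immediately above and in the lines that follow build the target configuration used by this test. A minimal standalone sketch of the same sequence is shown here, assuming the rpc_cmd helper resolves to SPDK's scripts/rpc.py talking to the already-running nvmf_tgt over the default /var/tmp/spdk.sock socket; in the autotest each call is additionally prefixed with "ip netns exec cvl_0_0_ns_spdk" because the target runs inside that namespace.

# Sketch only: the same RPCs as the rpc_cmd trace, issued via scripts/rpc.py.
rpc="scripts/rpc.py"                                    # path relative to the SPDK tree (assumed)
$rpc nvmf_create_transport -t tcp -o -u 8192            # options copied verbatim from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420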
00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.854 [2024-10-01 17:37:57.236416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:58.854 { 00:37:58.854 "params": { 00:37:58.854 "name": "Nvme$subsystem", 00:37:58.854 "trtype": "$TEST_TRANSPORT", 00:37:58.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:58.854 "adrfam": "ipv4", 00:37:58.854 "trsvcid": "$NVMF_PORT", 00:37:58.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:58.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:58.854 "hdgst": ${hdgst:-false}, 00:37:58.854 "ddgst": ${ddgst:-false} 00:37:58.854 }, 00:37:58.854 "method": "bdev_nvme_attach_controller" 00:37:58.854 } 00:37:58.854 EOF 00:37:58.854 )") 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:37:58.854 17:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:58.855 "params": { 00:37:58.855 "name": "Nvme1", 00:37:58.855 "trtype": "tcp", 00:37:58.855 "traddr": "10.0.0.2", 00:37:58.855 "adrfam": "ipv4", 00:37:58.855 "trsvcid": "4420", 00:37:58.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:58.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:58.855 "hdgst": false, 00:37:58.855 "ddgst": false 00:37:58.855 }, 00:37:58.855 "method": "bdev_nvme_attach_controller" 00:37:58.855 }' 00:37:58.855 [2024-10-01 17:37:57.291812] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:37:58.855 [2024-10-01 17:37:57.291867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290060 ] 00:37:58.855 [2024-10-01 17:37:57.352699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.855 [2024-10-01 17:37:57.383761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.115 Running I/O for 1 seconds... 00:38:00.052 9113.00 IOPS, 35.60 MiB/s 00:38:00.052 Latency(us) 00:38:00.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:00.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:00.052 Verification LBA range: start 0x0 length 0x4000 00:38:00.052 Nvme1n1 : 1.01 9208.08 35.97 0.00 0.00 13839.84 2566.83 15619.41 00:38:00.052 =================================================================================================================== 00:38:00.052 Total : 9208.08 35.97 0.00 0.00 13839.84 2566.83 15619.41 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3290288 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:00.313 { 00:38:00.313 "params": { 00:38:00.313 "name": "Nvme$subsystem", 00:38:00.313 "trtype": "$TEST_TRANSPORT", 00:38:00.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:00.313 "adrfam": "ipv4", 00:38:00.313 "trsvcid": "$NVMF_PORT", 00:38:00.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:00.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:00.313 "hdgst": ${hdgst:-false}, 00:38:00.313 "ddgst": ${ddgst:-false} 00:38:00.313 }, 00:38:00.313 "method": "bdev_nvme_attach_controller" 00:38:00.313 } 00:38:00.313 EOF 00:38:00.313 )") 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:38:00.313 17:37:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:00.313 "params": { 00:38:00.313 "name": "Nvme1", 00:38:00.313 "trtype": "tcp", 00:38:00.313 "traddr": "10.0.0.2", 00:38:00.313 "adrfam": "ipv4", 00:38:00.313 "trsvcid": "4420", 00:38:00.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:00.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:00.313 "hdgst": false, 00:38:00.313 "ddgst": false 00:38:00.313 }, 00:38:00.313 "method": "bdev_nvme_attach_controller" 00:38:00.313 }' 00:38:00.313 [2024-10-01 17:37:58.715844] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:38:00.313 [2024-10-01 17:37:58.715899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290288 ] 00:38:00.313 [2024-10-01 17:37:58.776959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.313 [2024-10-01 17:37:58.806541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.574 Running I/O for 15 seconds... 00:38:03.473 11649.00 IOPS, 45.50 MiB/s 11491.50 IOPS, 44.89 MiB/s 17:38:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3289909 00:38:03.473 17:38:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:03.473 [2024-10-01 17:38:01.681619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.473 [2024-10-01 17:38:01.681661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.473 [2024-10-01 17:38:01.681980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.681990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.473 [2024-10-01 17:38:01.682098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.473 [2024-10-01 17:38:01.682109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.473 [2024-10-01 17:38:01.682118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.474 [2024-10-01 17:38:01.682137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.474 [2024-10-01 17:38:01.682156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.474 [2024-10-01 17:38:01.682173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.474 [2024-10-01 17:38:01.682191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.474 [2024-10-01 17:38:01.682208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:03.474 [2024-10-01 17:38:01.682311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.474 [2024-10-01 17:38:01.682689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.474 [2024-10-01 17:38:01.682697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.682984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.682992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107568 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:03.475 [2024-10-01 17:38:01.683185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.475 [2024-10-01 17:38:01.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.475 [2024-10-01 17:38:01.683263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.476 [2024-10-01 17:38:01.683768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.476 [2024-10-01 17:38:01.683788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.476 [2024-10-01 17:38:01.683798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.476 [2024-10-01 17:38:01.683805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.683985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.683992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.684007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.477 [2024-10-01 17:38:01.684015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.684025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2f7e0 is same with the state(6) to be set 00:38:03.477 [2024-10-01 17:38:01.684034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:03.477 [2024-10-01 17:38:01.684040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:03.477 [2024-10-01 17:38:01.684046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107080 len:8 PRP1 0x0 PRP2 0x0 00:38:03.477 [2024-10-01 17:38:01.684055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.477 [2024-10-01 17:38:01.684092] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b2f7e0 was disconnected and freed. reset controller. 00:38:03.477 [2024-10-01 17:38:01.687649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.477 [2024-10-01 17:38:01.687704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.477 [2024-10-01 17:38:01.688504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.477 [2024-10-01 17:38:01.688523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.477 [2024-10-01 17:38:01.688531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.477 [2024-10-01 17:38:01.688748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.477 [2024-10-01 17:38:01.688966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.477 [2024-10-01 17:38:01.688975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.477 [2024-10-01 17:38:01.688984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.477 [2024-10-01 17:38:01.692481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.477 [2024-10-01 17:38:01.701742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.477 [2024-10-01 17:38:01.702397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.477 [2024-10-01 17:38:01.702437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.477 [2024-10-01 17:38:01.702448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.477 [2024-10-01 17:38:01.702687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.477 [2024-10-01 17:38:01.702907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.477 [2024-10-01 17:38:01.702917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.477 [2024-10-01 17:38:01.702924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.477 [2024-10-01 17:38:01.706429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
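In the completion notices above, the pair in parentheses is the NVMe status code type and status code, so (00/08) reads as Generic Command Status / Command Aborted due to SQ Deletion, matching the "ABORTED - SQ DELETION" text printed next to it: the queued WRITE/READ commands are being aborted because their submission queue is deleted as part of the controller reset. A minimal stand-alone sketch of that mapping (not SPDK code; status_string is a made-up helper for illustration):

    /* status_string() is a hypothetical helper, not part of SPDK: it maps the
     * (SCT/SC) pair printed in the log to the same human-readable text. */
    #include <stdio.h>

    static const char *status_string(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";   /* Generic status, SC 08h */
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESS";
        return "OTHER";                       /* full tables live in the NVMe spec */
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", status_string(0x0, 0x08));
        return 0;
    }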
00:38:03.477 [2024-10-01 17:38:01.715472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.477 [2024-10-01 17:38:01.716096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.477 [2024-10-01 17:38:01.716136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.477 [2024-10-01 17:38:01.716154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.477 [2024-10-01 17:38:01.716390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.477 [2024-10-01 17:38:01.716611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.477 [2024-10-01 17:38:01.716621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.477 [2024-10-01 17:38:01.716629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.477 [2024-10-01 17:38:01.720129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.477 [2024-10-01 17:38:01.729389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.477 [2024-10-01 17:38:01.730056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.477 [2024-10-01 17:38:01.730096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.477 [2024-10-01 17:38:01.730109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.477 [2024-10-01 17:38:01.730348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.477 [2024-10-01 17:38:01.730568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.477 [2024-10-01 17:38:01.730578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.477 [2024-10-01 17:38:01.730585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.477 [2024-10-01 17:38:01.734088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.477 [2024-10-01 17:38:01.743137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.477 [2024-10-01 17:38:01.743745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.477 [2024-10-01 17:38:01.743785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.743796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.744041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.744262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.744271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.744280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.747770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.478 [2024-10-01 17:38:01.757022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.757588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.757608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.757617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.757833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.758057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.758071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.758079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.761565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
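Every reconnect attempt in these cycles fails the same way: "connect() failed, errno = 111". On Linux, errno 111 is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420 while the target side is down, so each controller reinitialization fails and the reset is retried a few milliseconds later. A minimal sketch using plain BSD sockets (for illustration only, not SPDK's posix_sock_create) that hits the same errno when the peer is reachable but has no listener on that port:

    /* Plain-sockets sketch: connecting to a reachable host with no listener on
     * the port fails with errno 111 (ECONNREFUSED) on Linux. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* address/port from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }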
00:38:03.478 [2024-10-01 17:38:01.770810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.771399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.771418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.771426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.771642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.771858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.771867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.771875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.775366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.478 [2024-10-01 17:38:01.784613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.785285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.785325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.785337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.785572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.785793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.785803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.785811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.789311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.478 [2024-10-01 17:38:01.798378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.799052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.799091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.799105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.799344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.799564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.799575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.799583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.803081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.478 [2024-10-01 17:38:01.812135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.812741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.812781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.812792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.813037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.813259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.813268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.813277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.478 [2024-10-01 17:38:01.816767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.478 [2024-10-01 17:38:01.826069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.478 [2024-10-01 17:38:01.826747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.478 [2024-10-01 17:38:01.826787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.478 [2024-10-01 17:38:01.826800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.478 [2024-10-01 17:38:01.827046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.478 [2024-10-01 17:38:01.827267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.478 [2024-10-01 17:38:01.827277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.478 [2024-10-01 17:38:01.827285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.830778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.479 [2024-10-01 17:38:01.839834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.840510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.840550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.840561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.840797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.841026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.841036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.841044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.844535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.479 [2024-10-01 17:38:01.853581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.854266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.854306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.854322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.854560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.854781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.854790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.854798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.858295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.479 [2024-10-01 17:38:01.867346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.867917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.867937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.867945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.868167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.868385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.868394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.868401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.871897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.479 [2024-10-01 17:38:01.881149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.881799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.881838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.881849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.882092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.882314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.882323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.882331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.885822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.479 [2024-10-01 17:38:01.894872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.895508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.895547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.895559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.895794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.896023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.896033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.896045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.899540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.479 [2024-10-01 17:38:01.908790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.909437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.909477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.909488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.909723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.909944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.909954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.909962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.913463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.479 [2024-10-01 17:38:01.922716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.923366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.923406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.923418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.923654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.923874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.923884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.923892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.927393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.479 [2024-10-01 17:38:01.936644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.937209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.937248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.937261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.937500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.937720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.937730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.937738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.941240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.479 [2024-10-01 17:38:01.950494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.951095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.951135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.951148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.951387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.951607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.951617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.951625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.479 [2024-10-01 17:38:01.955124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.479 [2024-10-01 17:38:01.964379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.479 [2024-10-01 17:38:01.965041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.479 [2024-10-01 17:38:01.965080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.479 [2024-10-01 17:38:01.965093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.479 [2024-10-01 17:38:01.965332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.479 [2024-10-01 17:38:01.965552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.479 [2024-10-01 17:38:01.965562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.479 [2024-10-01 17:38:01.965570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.480 [2024-10-01 17:38:01.969069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.480 [2024-10-01 17:38:01.978130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.480 [2024-10-01 17:38:01.978744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.480 [2024-10-01 17:38:01.978784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.480 [2024-10-01 17:38:01.978795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.480 [2024-10-01 17:38:01.979040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.480 [2024-10-01 17:38:01.979262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.480 [2024-10-01 17:38:01.979272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.480 [2024-10-01 17:38:01.979280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.480 [2024-10-01 17:38:01.982774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.480 [2024-10-01 17:38:01.991910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.480 [2024-10-01 17:38:01.992561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.480 [2024-10-01 17:38:01.992602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.480 [2024-10-01 17:38:01.992613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.480 [2024-10-01 17:38:01.992854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.480 [2024-10-01 17:38:01.993084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.480 [2024-10-01 17:38:01.993095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.480 [2024-10-01 17:38:01.993102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.480 [2024-10-01 17:38:01.996594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.480 [2024-10-01 17:38:02.005845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.480 [2024-10-01 17:38:02.006533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.480 [2024-10-01 17:38:02.006573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.480 [2024-10-01 17:38:02.006584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.480 [2024-10-01 17:38:02.006819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.480 [2024-10-01 17:38:02.007048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.480 [2024-10-01 17:38:02.007058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.480 [2024-10-01 17:38:02.007067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.480 [2024-10-01 17:38:02.010558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.743 [2024-10-01 17:38:02.019608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.020144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.020165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.020176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.020393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.020610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.020619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.020627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.024131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.743 [2024-10-01 17:38:02.033406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.033975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.033998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.034007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.034224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.034441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.034450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.034463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.037951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.743 [2024-10-01 17:38:02.047202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.047858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.047898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.047909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.048154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.048376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.048386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.048394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.051885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.743 [2024-10-01 17:38:02.060932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.061559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.061599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.061609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.061845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.062076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.062086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.062095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.065588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.743 [2024-10-01 17:38:02.074851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.075485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.075525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.075536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.075771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.075991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.076011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.076020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 9915.67 IOPS, 38.73 MiB/s [2024-10-01 17:38:02.081166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.743 [2024-10-01 17:38:02.088774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.089444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.089488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.089500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.089736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.089956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.089965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.089973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.093476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.743 [2024-10-01 17:38:02.102520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.103215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.103255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.103266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.103502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.103722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.103731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.103739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.107244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.743 [2024-10-01 17:38:02.116289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.116956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.117003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.117015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.117250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.117470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.117480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.117488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.120981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.743 [2024-10-01 17:38:02.130037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.130723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.130762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.130773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.131018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.131244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.131254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.131262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.743 [2024-10-01 17:38:02.134753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.743 [2024-10-01 17:38:02.143800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.743 [2024-10-01 17:38:02.144467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.743 [2024-10-01 17:38:02.144507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.743 [2024-10-01 17:38:02.144518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.743 [2024-10-01 17:38:02.144753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.743 [2024-10-01 17:38:02.144973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.743 [2024-10-01 17:38:02.144983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.743 [2024-10-01 17:38:02.144991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.148494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.744 [2024-10-01 17:38:02.157537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.158116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.158156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.158168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.158407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.158628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.158638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.158646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.162151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.744 [2024-10-01 17:38:02.171418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.172088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.172127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.172139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.172374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.172605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.172616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.172624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.176126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.744 [2024-10-01 17:38:02.185172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.185838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.185878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.185890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.186134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.186355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.186365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.186373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.189865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.744 [2024-10-01 17:38:02.198924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.199579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.199619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.199630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.199865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.200095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.200106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.200115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.203610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.744 [2024-10-01 17:38:02.212660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.213286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.213326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.213338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.213573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.213795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.213804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.213813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.217313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.744 [2024-10-01 17:38:02.226582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.227028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.227050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.227062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.227281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.227497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.227508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.227515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.231012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.744 [2024-10-01 17:38:02.240495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.241058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.241078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.241086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.241303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.241520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.241530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.241538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.245029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.744 [2024-10-01 17:38:02.254278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.254917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.254958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.254969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.255215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.255437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.255446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.255455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.258943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.744 [2024-10-01 17:38:02.268201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.268749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.268788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.268801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.269047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.269269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.269284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.269292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.272799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.744 [2024-10-01 17:38:02.282060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.744 [2024-10-01 17:38:02.282720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.744 [2024-10-01 17:38:02.282759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:03.744 [2024-10-01 17:38:02.282771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:03.744 [2024-10-01 17:38:02.283018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:03.744 [2024-10-01 17:38:02.283240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.744 [2024-10-01 17:38:02.283249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.744 [2024-10-01 17:38:02.283257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.744 [2024-10-01 17:38:02.286752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.007 [2024-10-01 17:38:02.295818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.296488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.296528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.296539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.296776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.297007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.297017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.297025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.300517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.007 [2024-10-01 17:38:02.309564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.310228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.310269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.310279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.310515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.310736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.310746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.310754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.314258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.007 [2024-10-01 17:38:02.323318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.323983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.324032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.324044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.324281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.324501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.324511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.324519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.328019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.007 [2024-10-01 17:38:02.337082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.337754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.337793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.337804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.338051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.338272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.338282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.338290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.341786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.007 [2024-10-01 17:38:02.350846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.351427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.351447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.351455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.351671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.351888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.351897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.351904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.355405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.007 [2024-10-01 17:38:02.364673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.365203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.365221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.365235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.365452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.365669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.365677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.007 [2024-10-01 17:38:02.365685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.007 [2024-10-01 17:38:02.369182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.007 [2024-10-01 17:38:02.378456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.007 [2024-10-01 17:38:02.379007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.007 [2024-10-01 17:38:02.379048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.007 [2024-10-01 17:38:02.379060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.007 [2024-10-01 17:38:02.379298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.007 [2024-10-01 17:38:02.379518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.007 [2024-10-01 17:38:02.379529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.379537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.383036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.008 [2024-10-01 17:38:02.392306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.392933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.392973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.392985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.393230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.393452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.393461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.393470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.396965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.008 [2024-10-01 17:38:02.406051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.406711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.406750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.406761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.407006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.407228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.407242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.407251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.410748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.008 [2024-10-01 17:38:02.419812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.420364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.420386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.420394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.420611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.420828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.420837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.420844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.424354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.008 [2024-10-01 17:38:02.433646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.434277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.434317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.434328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.434564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.434784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.434794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.434802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.438298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.008 [2024-10-01 17:38:02.447597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.448358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.448398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.448409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.448644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.448865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.448875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.448883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.452380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.008 [2024-10-01 17:38:02.461433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.462017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.462038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.462046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.462263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.462480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.462489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.462496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.465981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.008 [2024-10-01 17:38:02.475260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.475668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.475687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.475695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.475911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.476135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.476146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.476154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.479648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.008 [2024-10-01 17:38:02.489132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.489555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.489573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.489581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.489797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.490020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.490029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.490036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.493532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.008 [2024-10-01 17:38:02.503014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.503564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.503581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.503589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.503809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.504033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.504042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.504050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.008 [2024-10-01 17:38:02.507542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.008 [2024-10-01 17:38:02.516811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.008 [2024-10-01 17:38:02.517343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.008 [2024-10-01 17:38:02.517359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.008 [2024-10-01 17:38:02.517367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.008 [2024-10-01 17:38:02.517582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.008 [2024-10-01 17:38:02.517799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.008 [2024-10-01 17:38:02.517808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.008 [2024-10-01 17:38:02.517815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.009 [2024-10-01 17:38:02.521323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.009 [2024-10-01 17:38:02.530598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.009 [2024-10-01 17:38:02.531119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.009 [2024-10-01 17:38:02.531137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.009 [2024-10-01 17:38:02.531145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.009 [2024-10-01 17:38:02.531362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.009 [2024-10-01 17:38:02.531577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.009 [2024-10-01 17:38:02.531588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.009 [2024-10-01 17:38:02.531595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.009 [2024-10-01 17:38:02.535094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.009 [2024-10-01 17:38:02.544369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.009 [2024-10-01 17:38:02.544922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.009 [2024-10-01 17:38:02.544938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.009 [2024-10-01 17:38:02.544946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.009 [2024-10-01 17:38:02.545169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.009 [2024-10-01 17:38:02.545386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.009 [2024-10-01 17:38:02.545396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.009 [2024-10-01 17:38:02.545408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.009 [2024-10-01 17:38:02.548901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.271 [2024-10-01 17:38:02.558178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.271 [2024-10-01 17:38:02.558743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.271 [2024-10-01 17:38:02.558760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.271 [2024-10-01 17:38:02.558768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.271 [2024-10-01 17:38:02.558983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.271 [2024-10-01 17:38:02.559207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.271 [2024-10-01 17:38:02.559216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.271 [2024-10-01 17:38:02.559224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.271 [2024-10-01 17:38:02.562715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.271 [2024-10-01 17:38:02.571987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.271 [2024-10-01 17:38:02.572608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.271 [2024-10-01 17:38:02.572649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.271 [2024-10-01 17:38:02.572660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.271 [2024-10-01 17:38:02.572896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.271 [2024-10-01 17:38:02.573127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.271 [2024-10-01 17:38:02.573138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.271 [2024-10-01 17:38:02.573146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.271 [2024-10-01 17:38:02.576652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.271 [2024-10-01 17:38:02.585929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.271 [2024-10-01 17:38:02.586568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.271 [2024-10-01 17:38:02.586607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.271 [2024-10-01 17:38:02.586618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.271 [2024-10-01 17:38:02.586853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.271 [2024-10-01 17:38:02.587083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.271 [2024-10-01 17:38:02.587094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.271 [2024-10-01 17:38:02.587102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.271 [2024-10-01 17:38:02.590602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.271 [2024-10-01 17:38:02.599871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.271 [2024-10-01 17:38:02.600419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.271 [2024-10-01 17:38:02.600445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.600453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.600670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.600886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.600896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.600903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.604403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.272 [2024-10-01 17:38:02.613665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.614211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.614251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.614262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.614498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.614718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.614728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.614736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.618245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.272 [2024-10-01 17:38:02.627526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.628055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.628076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.628084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.628301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.628518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.628527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.628535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.632037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.272 [2024-10-01 17:38:02.641302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.641867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.641885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.641893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.642115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.642342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.642351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.642358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.645853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.272 [2024-10-01 17:38:02.655149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.655665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.655683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.655691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.655907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.656130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.656141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.656148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.659641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.272 [2024-10-01 17:38:02.668906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.669422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.669439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.669447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.669662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.669879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.669888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.669895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.673389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.272 [2024-10-01 17:38:02.682665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.683204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.683221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.683229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.683444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.683662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.683672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.683679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.687179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.272 [2024-10-01 17:38:02.696444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.697103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.697143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.697156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.697393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.697613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.697623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.697631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.701127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.272 [2024-10-01 17:38:02.710178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.710538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.710558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.710567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.710783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.711005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.711017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.711024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.714598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.272 [2024-10-01 17:38:02.724073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.724680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.724719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.724730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.724966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.272 [2024-10-01 17:38:02.725196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.272 [2024-10-01 17:38:02.725206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.272 [2024-10-01 17:38:02.725215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.272 [2024-10-01 17:38:02.728706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.272 [2024-10-01 17:38:02.737960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.272 [2024-10-01 17:38:02.738593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.272 [2024-10-01 17:38:02.738633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.272 [2024-10-01 17:38:02.738649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.272 [2024-10-01 17:38:02.738885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.739113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.739124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.739131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.742628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.273 [2024-10-01 17:38:02.751692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.273 [2024-10-01 17:38:02.752283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.273 [2024-10-01 17:38:02.752304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.273 [2024-10-01 17:38:02.752312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.273 [2024-10-01 17:38:02.752529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.752746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.752755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.752763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.756261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.273 [2024-10-01 17:38:02.765525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.273 [2024-10-01 17:38:02.766048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.273 [2024-10-01 17:38:02.766075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.273 [2024-10-01 17:38:02.766083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.273 [2024-10-01 17:38:02.766300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.766516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.766525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.766532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.770027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.273 [2024-10-01 17:38:02.779298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.273 [2024-10-01 17:38:02.779954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.273 [2024-10-01 17:38:02.780003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.273 [2024-10-01 17:38:02.780017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.273 [2024-10-01 17:38:02.780253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.780474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.780488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.780496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.783998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.273 [2024-10-01 17:38:02.793057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.273 [2024-10-01 17:38:02.793729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.273 [2024-10-01 17:38:02.793769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.273 [2024-10-01 17:38:02.793780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.273 [2024-10-01 17:38:02.794026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.794248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.794257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.794265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.797763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.273 [2024-10-01 17:38:02.806838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.273 [2024-10-01 17:38:02.807477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.273 [2024-10-01 17:38:02.807517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.273 [2024-10-01 17:38:02.807528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.273 [2024-10-01 17:38:02.807764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.273 [2024-10-01 17:38:02.807985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.273 [2024-10-01 17:38:02.808004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.273 [2024-10-01 17:38:02.808012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.273 [2024-10-01 17:38:02.811510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.535 [2024-10-01 17:38:02.820779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.535 [2024-10-01 17:38:02.821258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.535 [2024-10-01 17:38:02.821279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.821287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.821504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.821720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.821730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.821738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.825238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.536 [2024-10-01 17:38:02.834714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.835236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.835254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.835262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.835479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.835696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.835705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.835712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.839210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.536 [2024-10-01 17:38:02.848472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.849021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.849039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.849047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.849263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.849479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.849489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.849496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.852988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.536 [2024-10-01 17:38:02.862288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.862809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.862826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.862833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.863056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.863274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.863285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.863292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.866783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.536 [2024-10-01 17:38:02.876072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.876627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.876644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.876652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.876872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.877096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.877106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.877113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.880608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.536 [2024-10-01 17:38:02.889892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.890452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.890469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.890478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.890694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.890911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.890919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.890927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.894427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.536 [2024-10-01 17:38:02.903696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.904217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.904234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.904242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.904458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.904676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.904685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.904693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.908190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.536 [2024-10-01 17:38:02.917460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.918077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.918116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.918129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.918368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.918588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.918599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.918612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.922125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.536 [2024-10-01 17:38:02.931383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.932044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.932085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.932096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.932332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.932552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.932563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.932571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.936072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.536 [2024-10-01 17:38:02.945135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.536 [2024-10-01 17:38:02.945677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.536 [2024-10-01 17:38:02.945698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.536 [2024-10-01 17:38:02.945706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.536 [2024-10-01 17:38:02.945922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.536 [2024-10-01 17:38:02.946146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.536 [2024-10-01 17:38:02.946156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.536 [2024-10-01 17:38:02.946163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.536 [2024-10-01 17:38:02.949657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.536 [2024-10-01 17:38:02.958923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:02.959419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:02.959438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:02.959446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:02.959662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:02.959879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:02.959888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:02.959896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:02.963394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.537 [2024-10-01 17:38:02.972664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:02.973284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:02.973324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:02.973336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:02.973572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:02.973792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:02.973802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:02.973810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:02.977328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.537 [2024-10-01 17:38:02.986599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:02.987244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:02.987283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:02.987294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:02.987530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:02.987750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:02.987759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:02.987768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:02.991264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.537 [2024-10-01 17:38:03.000524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.001056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.001076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.001084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.001301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.001517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.001526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.001534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.005034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.537 [2024-10-01 17:38:03.014400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.014965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.014983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.014991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.015219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.015436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.015445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.015452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.018942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.537 [2024-10-01 17:38:03.028228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.028884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.028924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.028935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.029181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.029402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.029413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.029421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.032920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.537 [2024-10-01 17:38:03.041982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.042513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.042534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.042543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.042759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.042976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.042986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.042999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.046492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.537 [2024-10-01 17:38:03.055754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.056297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.056315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.056322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.056538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.056755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.056764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.056776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.060273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.537 [2024-10-01 17:38:03.069540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.537 [2024-10-01 17:38:03.070088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.537 [2024-10-01 17:38:03.070109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.537 [2024-10-01 17:38:03.070117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.537 [2024-10-01 17:38:03.070334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.537 [2024-10-01 17:38:03.070550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.537 [2024-10-01 17:38:03.070560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.537 [2024-10-01 17:38:03.070569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.537 [2024-10-01 17:38:03.074069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.799 7436.75 IOPS, 29.05 MiB/s [2024-10-01 17:38:03.084591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.085127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.085146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.085154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.085370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.085587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.085597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.085605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.089107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.799 [2024-10-01 17:38:03.098380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.098779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.098799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.098806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.099029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.099247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.099257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.099265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.102757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.799 [2024-10-01 17:38:03.112232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.112742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.112763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.112771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.112987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.113211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.113220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.113227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.116720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.799 [2024-10-01 17:38:03.126001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.126517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.126533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.126541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.126756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.126973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.126982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.126990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.130485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.799 [2024-10-01 17:38:03.139746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.140356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.140396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.140408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.140643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.140864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.140873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.140881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.144384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.799 [2024-10-01 17:38:03.153641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.154311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.154351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.154363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.154600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.154825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.154835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.154843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.158344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.799 [2024-10-01 17:38:03.167389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.168055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.168096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.168108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.168345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.168565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.799 [2024-10-01 17:38:03.168575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.799 [2024-10-01 17:38:03.168583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.799 [2024-10-01 17:38:03.172087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.799 [2024-10-01 17:38:03.181164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.799 [2024-10-01 17:38:03.181830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.799 [2024-10-01 17:38:03.181869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.799 [2024-10-01 17:38:03.181880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.799 [2024-10-01 17:38:03.182126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.799 [2024-10-01 17:38:03.182355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.182365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.182373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.185870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.800 [2024-10-01 17:38:03.194932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.195572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.195612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.195623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.195858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.196089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.196101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.196109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.199611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.800 [2024-10-01 17:38:03.208676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.209396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.209435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.209447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.209682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.209903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.209912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.209920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.213423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.800 [2024-10-01 17:38:03.222484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.223095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.223135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.223148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.223384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.223606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.223616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.223624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.227122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.800 [2024-10-01 17:38:03.236372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.236944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.236964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.236972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.237194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.237412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.237422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.237429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.240914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.800 [2024-10-01 17:38:03.250162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.250676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.250693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.250705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.250921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.251144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.251154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.251161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.254652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.800 [2024-10-01 17:38:03.263972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.264588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.264628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.264639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.264875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.265106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.265117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.265124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.268621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.800 [2024-10-01 17:38:03.277896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.278483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.278522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.278533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.278769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.278989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.279013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.279021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.282513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.800 [2024-10-01 17:38:03.291762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.292384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.292423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.292435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.292670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.292891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.292905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.292913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.296415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.800 [2024-10-01 17:38:03.305679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.306327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.306366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.306377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.306613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.306832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.306843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.306851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.310351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.800 [2024-10-01 17:38:03.319600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.800 [2024-10-01 17:38:03.320223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.800 [2024-10-01 17:38:03.320262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.800 [2024-10-01 17:38:03.320273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.800 [2024-10-01 17:38:03.320509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.800 [2024-10-01 17:38:03.320729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.800 [2024-10-01 17:38:03.320738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.800 [2024-10-01 17:38:03.320747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.800 [2024-10-01 17:38:03.324257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:04.801 [2024-10-01 17:38:03.333507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:04.801 [2024-10-01 17:38:03.334092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.801 [2024-10-01 17:38:03.334132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:04.801 [2024-10-01 17:38:03.334145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:04.801 [2024-10-01 17:38:03.334384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:04.801 [2024-10-01 17:38:03.334604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.801 [2024-10-01 17:38:03.334613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.801 [2024-10-01 17:38:03.334621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.801 [2024-10-01 17:38:03.338123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.061 [2024-10-01 17:38:03.347385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.348072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.348112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.348123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.348359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.348579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.348588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.348596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.352099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.061 [2024-10-01 17:38:03.361142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.361811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.361852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.361863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.362108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.362330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.362340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.362347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.365838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.061 [2024-10-01 17:38:03.374885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.375482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.375522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.375533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.375769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.375989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.376010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.376018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.379522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.061 [2024-10-01 17:38:03.388777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.389415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.389454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.389465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.389705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.389926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.389935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.389943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.393446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.061 [2024-10-01 17:38:03.402694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.403325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.403365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.403376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.403611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.403832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.403842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.403850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.407349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.061 [2024-10-01 17:38:03.416599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.417212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.417252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.417263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.417499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.417720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.417729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.417737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.421246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.061 [2024-10-01 17:38:03.430497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.431101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.431141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.431153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.431388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.061 [2024-10-01 17:38:03.431608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.061 [2024-10-01 17:38:03.431618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.061 [2024-10-01 17:38:03.431630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.061 [2024-10-01 17:38:03.435132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.061 [2024-10-01 17:38:03.444384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.061 [2024-10-01 17:38:03.445033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.061 [2024-10-01 17:38:03.445073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.061 [2024-10-01 17:38:03.445084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.061 [2024-10-01 17:38:03.445319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.445539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.445549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.445557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.449058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.458344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.458985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.459031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.459042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.459278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.459498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.459509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.459517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.463015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.062 [2024-10-01 17:38:03.472269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.472919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.472958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.472969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.473213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.473435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.473444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.473452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.476953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.486006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.486585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.486605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.486614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.486830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.487079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.487092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.487100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.490593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.062 [2024-10-01 17:38:03.499847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.500517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.500557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.500568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.500803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.501033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.501043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.501051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.504543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.513584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.514274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.514313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.514324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.514560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.514780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.514789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.514798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.518298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.062 [2024-10-01 17:38:03.527352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.528032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.528072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.528085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.528328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.528549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.528558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.528566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.532066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.541109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.541776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.541815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.541826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.542071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.542293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.542302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.542310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.545805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.062 [2024-10-01 17:38:03.554851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.555479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.555519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.555530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.555766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.555987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.556008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.556017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.559507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.568755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.569386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.569425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.569436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.569672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.569892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.569903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.569915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.573415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.062 [2024-10-01 17:38:03.582658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.583320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.583360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.583371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.583606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.583827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.583836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.583844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.587345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.062 [2024-10-01 17:38:03.596600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.062 [2024-10-01 17:38:03.597285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.062 [2024-10-01 17:38:03.597325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.062 [2024-10-01 17:38:03.597336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.062 [2024-10-01 17:38:03.597571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.062 [2024-10-01 17:38:03.597792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.062 [2024-10-01 17:38:03.597801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.062 [2024-10-01 17:38:03.597809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.062 [2024-10-01 17:38:03.601310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.324 [2024-10-01 17:38:03.610365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.611038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.611077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.611089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.611324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.611545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.611554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.611562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.615063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.324 [2024-10-01 17:38:03.624129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.624696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.624721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.624729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.624945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.625170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.625180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.625187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.628676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.324 [2024-10-01 17:38:03.637927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.638553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.638592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.638603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.638839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.639068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.639078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.639086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.642578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.324 [2024-10-01 17:38:03.651671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.652312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.652352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.652363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.652598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.652818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.652829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.652837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.656341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.324 [2024-10-01 17:38:03.665597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.666229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.666270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.666283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.666519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.666744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.666755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.666763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.670264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.324 [2024-10-01 17:38:03.679530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.680117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.680157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.680170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.324 [2024-10-01 17:38:03.680408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.324 [2024-10-01 17:38:03.680628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.324 [2024-10-01 17:38:03.680638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.324 [2024-10-01 17:38:03.680646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.324 [2024-10-01 17:38:03.684144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.324 [2024-10-01 17:38:03.693395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.324 [2024-10-01 17:38:03.693931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.324 [2024-10-01 17:38:03.693952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.324 [2024-10-01 17:38:03.693961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.694209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.694430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.694440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.694447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.697939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.325 [2024-10-01 17:38:03.707191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.707856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.707896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.707907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.708154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.708375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.708384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.708393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.711890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.325 [2024-10-01 17:38:03.720937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.721489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.721530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.721541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.721777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.722005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.722015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.722023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.725517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.325 [2024-10-01 17:38:03.734771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.735402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.735442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.735453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.735689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.735909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.735918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.735926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.739426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.325 [2024-10-01 17:38:03.748547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.749267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.749307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.749318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.749554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.749774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.749784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.749792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.753293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.325 [2024-10-01 17:38:03.762340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.763027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.763067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.763083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.763319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.763539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.763549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.763556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.767056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.325 [2024-10-01 17:38:03.776102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.776765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.776805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.776815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.777060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.777281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.777291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.777299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.780802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.325 [2024-10-01 17:38:03.789848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.790494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.790534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.790545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.790781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.791011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.791021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.791029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.794524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.325 [2024-10-01 17:38:03.803774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.804390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.804429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.804441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.804676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.804896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.804911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.804919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.808430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.325 [2024-10-01 17:38:03.817700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.818371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.818411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.818422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.818657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.818878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.325 [2024-10-01 17:38:03.818888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.325 [2024-10-01 17:38:03.818896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.325 [2024-10-01 17:38:03.822409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.325 [2024-10-01 17:38:03.831456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.325 [2024-10-01 17:38:03.832084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.325 [2024-10-01 17:38:03.832123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.325 [2024-10-01 17:38:03.832136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.325 [2024-10-01 17:38:03.832375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.325 [2024-10-01 17:38:03.832595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.326 [2024-10-01 17:38:03.832604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.326 [2024-10-01 17:38:03.832612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.326 [2024-10-01 17:38:03.836110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.326 [2024-10-01 17:38:03.845387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.326 [2024-10-01 17:38:03.846050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.326 [2024-10-01 17:38:03.846089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.326 [2024-10-01 17:38:03.846102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.326 [2024-10-01 17:38:03.846338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.326 [2024-10-01 17:38:03.846559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.326 [2024-10-01 17:38:03.846568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.326 [2024-10-01 17:38:03.846576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.326 [2024-10-01 17:38:03.850080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.326 [2024-10-01 17:38:03.859132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.326 [2024-10-01 17:38:03.859796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.326 [2024-10-01 17:38:03.859835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.326 [2024-10-01 17:38:03.859846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.326 [2024-10-01 17:38:03.860090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.326 [2024-10-01 17:38:03.860311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.326 [2024-10-01 17:38:03.860321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.326 [2024-10-01 17:38:03.860329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.326 [2024-10-01 17:38:03.863820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.587 [2024-10-01 17:38:03.872871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.587 [2024-10-01 17:38:03.873552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.587 [2024-10-01 17:38:03.873592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.587 [2024-10-01 17:38:03.873603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.587 [2024-10-01 17:38:03.873839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.587 [2024-10-01 17:38:03.874070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.587 [2024-10-01 17:38:03.874080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.587 [2024-10-01 17:38:03.874088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.587 [2024-10-01 17:38:03.877581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.587 [2024-10-01 17:38:03.886636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.587 [2024-10-01 17:38:03.887326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.587 [2024-10-01 17:38:03.887366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.587 [2024-10-01 17:38:03.887377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.587 [2024-10-01 17:38:03.887613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.587 [2024-10-01 17:38:03.887833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.587 [2024-10-01 17:38:03.887843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.587 [2024-10-01 17:38:03.887851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.587 [2024-10-01 17:38:03.891352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.587 [2024-10-01 17:38:03.900396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.900956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.900976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.900985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.901213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.901431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.901440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.901448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.904963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.588 [2024-10-01 17:38:03.914227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.914820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.914859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.914870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.915115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.915337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.915346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.915355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.918847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.588 [2024-10-01 17:38:03.928113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.928633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.928652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.928660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.928876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.929100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.929110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.929117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.932601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.588 [2024-10-01 17:38:03.941845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.942479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.942519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.942531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.942766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.942986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.943006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.943018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.946509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.588 [2024-10-01 17:38:03.955760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.956393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.956433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.956444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.956680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.956900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.956910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.956918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.960419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.588 [2024-10-01 17:38:03.969705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.970346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.970386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.970398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.970635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.970855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.970866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.970874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.974372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.588 [2024-10-01 17:38:03.983636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.984292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.984331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.984344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.984581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.984801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.984811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.984819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:03.988319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.588 [2024-10-01 17:38:03.997366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:03.997826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:03.997846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:03.997855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:03.998079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:03.998298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:03.998308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:03.998316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:04.001802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.588 [2024-10-01 17:38:04.011255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:04.011904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:04.011944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:04.011956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:04.012202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:04.012423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:04.012433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:04.012440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:04.015937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.588 [2024-10-01 17:38:04.024995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:04.025662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:04.025702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.588 [2024-10-01 17:38:04.025713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.588 [2024-10-01 17:38:04.025948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.588 [2024-10-01 17:38:04.026179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.588 [2024-10-01 17:38:04.026189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.588 [2024-10-01 17:38:04.026198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.588 [2024-10-01 17:38:04.029692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.588 [2024-10-01 17:38:04.038734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.588 [2024-10-01 17:38:04.039387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.588 [2024-10-01 17:38:04.039426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.039438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.039673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.039902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.039912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.039920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.043422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.589 [2024-10-01 17:38:04.052467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.053080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.053120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.053133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.053371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.053592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.053601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.053609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.057112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.589 [2024-10-01 17:38:04.066368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.066992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.067038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.067049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.067285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.067505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.067515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.067523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.071014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.589 [2024-10-01 17:38:04.080284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.080951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.080991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.081011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.081248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.081469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.081479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.081487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 5949.40 IOPS, 23.24 MiB/s [2024-10-01 17:38:04.086632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.589 [2024-10-01 17:38:04.094035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.094656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.094696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.094707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.094943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.095173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.095184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.095192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.098684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.589 [2024-10-01 17:38:04.107938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.108512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.108532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.108541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.108757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.108973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.108982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.108990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.112513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.589 [2024-10-01 17:38:04.121774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.589 [2024-10-01 17:38:04.122277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.589 [2024-10-01 17:38:04.122296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.589 [2024-10-01 17:38:04.122304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.589 [2024-10-01 17:38:04.122520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.589 [2024-10-01 17:38:04.122736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.589 [2024-10-01 17:38:04.122745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.589 [2024-10-01 17:38:04.122752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.589 [2024-10-01 17:38:04.126244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.853 [2024-10-01 17:38:04.135700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.853 [2024-10-01 17:38:04.136202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.136224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.136232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.136448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.136664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.136673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.136680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.140170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.854 [2024-10-01 17:38:04.149620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.150235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.150275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.150288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.150525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.150745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.150755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.150763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.154265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.854 [2024-10-01 17:38:04.163521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.164124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.164164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.164177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.164415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.164637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.164647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.164654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.168156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.854 [2024-10-01 17:38:04.177416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.178048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.178088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.178100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.178338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.178563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.178573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.178581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.182093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.854 [2024-10-01 17:38:04.191351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.191975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.192023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.192037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.192273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.192493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.192503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.192511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.196007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.854 [2024-10-01 17:38:04.205264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.205940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.205980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.205992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.206237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.206458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.206468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.206476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.209968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.854 [2024-10-01 17:38:04.219025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.219692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.219732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.219743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.219978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.220207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.220218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.854 [2024-10-01 17:38:04.220226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.854 [2024-10-01 17:38:04.223737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.854 [2024-10-01 17:38:04.232807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.854 [2024-10-01 17:38:04.233387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.854 [2024-10-01 17:38:04.233407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.854 [2024-10-01 17:38:04.233415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.854 [2024-10-01 17:38:04.233632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.854 [2024-10-01 17:38:04.233849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.854 [2024-10-01 17:38:04.233858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.233866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.237367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.855 [2024-10-01 17:38:04.246616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.247145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.247163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.247171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.247387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.247604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.247614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.247621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.251112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.855 [2024-10-01 17:38:04.260363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.260925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.260942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.260950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.261170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.261386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.261395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.261402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.264885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.855 [2024-10-01 17:38:04.274144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.274704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.274720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.274732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.274948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.275170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.275180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.275188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.278674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.855 [2024-10-01 17:38:04.287945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.288561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.288601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.288612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.288848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.289079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.289089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.289097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.292591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.855 [2024-10-01 17:38:04.301845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.302383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.302404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.302412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.302628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.302846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.302854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.302862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.306356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.855 [2024-10-01 17:38:04.315613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.316151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.316169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.316177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.316393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.316609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.316623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.316630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.320150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.855 [2024-10-01 17:38:04.329419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.329986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.330009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.330017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.330234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.330451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.330460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.330467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.855 [2024-10-01 17:38:04.333952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.855 [2024-10-01 17:38:04.343203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.855 [2024-10-01 17:38:04.343760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.855 [2024-10-01 17:38:04.343777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.855 [2024-10-01 17:38:04.343785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.855 [2024-10-01 17:38:04.344005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.855 [2024-10-01 17:38:04.344221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.855 [2024-10-01 17:38:04.344231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.855 [2024-10-01 17:38:04.344239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.856 [2024-10-01 17:38:04.347722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.856 [2024-10-01 17:38:04.356971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.856 [2024-10-01 17:38:04.357535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.856 [2024-10-01 17:38:04.357552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.856 [2024-10-01 17:38:04.357560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.856 [2024-10-01 17:38:04.357776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.856 [2024-10-01 17:38:04.357997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.856 [2024-10-01 17:38:04.358006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.856 [2024-10-01 17:38:04.358014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.856 [2024-10-01 17:38:04.361499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.856 [2024-10-01 17:38:04.370746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.856 [2024-10-01 17:38:04.371388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.856 [2024-10-01 17:38:04.371428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.856 [2024-10-01 17:38:04.371439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.856 [2024-10-01 17:38:04.371675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.856 [2024-10-01 17:38:04.371895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.856 [2024-10-01 17:38:04.371904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.856 [2024-10-01 17:38:04.371912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.856 [2024-10-01 17:38:04.375412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:05.856 [2024-10-01 17:38:04.384674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:05.856 [2024-10-01 17:38:04.385365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.856 [2024-10-01 17:38:04.385405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:05.856 [2024-10-01 17:38:04.385417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:05.856 [2024-10-01 17:38:04.385652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:05.856 [2024-10-01 17:38:04.385872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:05.856 [2024-10-01 17:38:04.385882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:05.856 [2024-10-01 17:38:04.385890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:05.856 [2024-10-01 17:38:04.389390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:05.856 [2024-10-01 17:38:04.398436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.117 [2024-10-01 17:38:04.398965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.117 [2024-10-01 17:38:04.398986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.117 [2024-10-01 17:38:04.399001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.399218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.399435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.399444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.399452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.402934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.412185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.412721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.412760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.412771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.413020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.413242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.413252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.413260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.416750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.426016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.426545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.426566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.426574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.426791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.427015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.427026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.427033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.430522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.439777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.440347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.440365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.440373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.440589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.440805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.440815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.440822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.444312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.453560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.454134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.454174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.454185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.454421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.454641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.454650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.454663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.458164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.467416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.467979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.468005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.468014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.468231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.468447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.468456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.468464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.471948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.481210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.481769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.481786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.481794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.482015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.482233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.482244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.482251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.485736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.494983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.495547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.495564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.495572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.495788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.496010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.496019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.496026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.499515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.508765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.509331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.509348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.509356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.509571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.509788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.509797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.509804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.513294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.522546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.522954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.522970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.522978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.523197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.523415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.523424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.523431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.526913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.536409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.536941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.536959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.536967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.537187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.537405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.537414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.537421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.540907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.550196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.550811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.550851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.550862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.551111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.551333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.551342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.551350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.554842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.118 [2024-10-01 17:38:04.564109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.564656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.564695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.564706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.564941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.565171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.565182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.565190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.118 [2024-10-01 17:38:04.568680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.118 [2024-10-01 17:38:04.578115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.118 [2024-10-01 17:38:04.578788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.118 [2024-10-01 17:38:04.578828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.118 [2024-10-01 17:38:04.578839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.118 [2024-10-01 17:38:04.579081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.118 [2024-10-01 17:38:04.579302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.118 [2024-10-01 17:38:04.579312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.118 [2024-10-01 17:38:04.579320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.582823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.119 [2024-10-01 17:38:04.591874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.592422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.592442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.592450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.592667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.592884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.592894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.592906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.596405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.119 [2024-10-01 17:38:04.605667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.606293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.606333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.606344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.606580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.606801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.606810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.606818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.610324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.119 [2024-10-01 17:38:04.619593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.620133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.620154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.620162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.620379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.620597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.620606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.620613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.624122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.119 [2024-10-01 17:38:04.633388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.633945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.633963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.633971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.634193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.634410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.634419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.634426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.637916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.119 [2024-10-01 17:38:04.647185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.647760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.647804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.647815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.648060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.648282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.648292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.648300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.119 [2024-10-01 17:38:04.651796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.119 [2024-10-01 17:38:04.661068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.119 [2024-10-01 17:38:04.661639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.119 [2024-10-01 17:38:04.661659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.119 [2024-10-01 17:38:04.661668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.119 [2024-10-01 17:38:04.661884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.119 [2024-10-01 17:38:04.662110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.119 [2024-10-01 17:38:04.662120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.119 [2024-10-01 17:38:04.662128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.381 [2024-10-01 17:38:04.665620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.381 [2024-10-01 17:38:04.674886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.381 [2024-10-01 17:38:04.675544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.381 [2024-10-01 17:38:04.675584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.381 [2024-10-01 17:38:04.675595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.381 [2024-10-01 17:38:04.675831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.381 [2024-10-01 17:38:04.676063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.676074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.676082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3289909 Killed "${NVMF_APP[@]}" "$@" 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:06.382 [2024-10-01 17:38:04.679581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3291510 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3291510 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3291510 ']' 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.382 [2024-10-01 17:38:04.688669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:06.382 [2024-10-01 17:38:04.689197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.689217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.689227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.382 [2024-10-01 17:38:04.689444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 17:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:06.382 [2024-10-01 17:38:04.689664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.689675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.689683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.693188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.382 [2024-10-01 17:38:04.702460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.702870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.702888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.702896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.703118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.703335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.703344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.703351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.706844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
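The "connect() failed, errno = 111" entries above are ECONNREFUSED: bdevperf keeps retrying 10.0.0.2:4420 while the nvmf target that was just killed by bdevperf.sh has not yet been restarted and is not listening. A minimal standalone sketch (not SPDK code; the address and port are copied from the log and assumed to have no listener when this is run) that reproduces the same errno:

/* illustration only: connect() to a TCP port with no listener returns ECONNREFUSED (errno 111) */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* port taken from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

In the log the same refusal surfaces through posix_sock_create()/nvme_tcp_qpair_connect_sock(), after which bdev_nvme marks the reset attempt as failed and schedules another retry, producing the repeating pattern seen here.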
00:38:06.382 [2024-10-01 17:38:04.716316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.716776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.716793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.716801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.717024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.717246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.717256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.717263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.720753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.382 [2024-10-01 17:38:04.730236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.730684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.730701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.730709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.730924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.731149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.731159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.731166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.734685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.382 [2024-10-01 17:38:04.744171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.744700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.744718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.744726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.744943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.745168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.745179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.745186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.748677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.382 [2024-10-01 17:38:04.751040] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:38:06.382 [2024-10-01 17:38:04.751087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.382 [2024-10-01 17:38:04.757950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.758521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.758539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.758546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.758762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.758987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.759004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.759011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.762503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.382 [2024-10-01 17:38:04.771764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.772349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.772366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.772374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.772590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.772807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.772816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.772823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.776406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.382 [2024-10-01 17:38:04.785682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.382 [2024-10-01 17:38:04.786231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.382 [2024-10-01 17:38:04.786270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.382 [2024-10-01 17:38:04.786283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.382 [2024-10-01 17:38:04.786523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.382 [2024-10-01 17:38:04.786743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.382 [2024-10-01 17:38:04.786752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.382 [2024-10-01 17:38:04.786760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.382 [2024-10-01 17:38:04.790272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.383 [2024-10-01 17:38:04.799558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.800038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.800058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.800067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.800284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.800501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.800511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.800519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.804019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.813499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.814210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.814250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.814262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.814498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.814719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.814729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.814737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.818239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.383 [2024-10-01 17:38:04.827305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.827748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.827770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.827778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.828000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.828219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.828228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.828235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.831721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.834438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:06.383 [2024-10-01 17:38:04.841187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.841718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.841736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.841744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.841961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.842184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.842195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.842202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.845692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.383 [2024-10-01 17:38:04.854956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.855669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.855714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.855731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.855975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.856205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.856215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.856223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.859717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.862788] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.383 [2024-10-01 17:38:04.862815] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.383 [2024-10-01 17:38:04.862821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.383 [2024-10-01 17:38:04.862826] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.383 [2024-10-01 17:38:04.862830] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:06.383 [2024-10-01 17:38:04.862954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:06.383 [2024-10-01 17:38:04.863122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:06.383 [2024-10-01 17:38:04.863219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.383 [2024-10-01 17:38:04.868774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.869376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.869398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.869407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.869625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.869843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.869852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.869860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:38:06.383 [2024-10-01 17:38:04.873353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.882625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.883311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.883357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.883368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.883610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.883831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.883841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.883849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.887358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.896417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.897080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.897122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.897134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.897373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.897594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.897603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.897611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.901114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
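The "-m 0xE" passed to nvmf_tgt above is a CPU core bitmask: 0xE is binary 1110, selecting cores 1, 2 and 3, which matches the three "Reactor started on core N" notices earlier in the log. A small illustration (not SPDK code) of how such a mask expands:

/* illustration only: expand the 0xE core mask into individual CPU indices */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xE;  /* the -m 0xE argument from the nvmf_tgt command line above */

    for (unsigned int cpu = 0; cpu < 32; cpu++)
        if (mask & (1u << cpu))
            printf("core %u selected\n", cpu);  /* prints cores 1, 2 and 3 */
    return 0;
}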
00:38:06.383 [2024-10-01 17:38:04.910165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.910585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.910605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.910613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.383 [2024-10-01 17:38:04.910829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.383 [2024-10-01 17:38:04.911053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.383 [2024-10-01 17:38:04.911063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.383 [2024-10-01 17:38:04.911071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.383 [2024-10-01 17:38:04.914556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.383 [2024-10-01 17:38:04.924026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.383 [2024-10-01 17:38:04.924653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.383 [2024-10-01 17:38:04.924693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.383 [2024-10-01 17:38:04.924704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.384 [2024-10-01 17:38:04.924940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.384 [2024-10-01 17:38:04.925168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.384 [2024-10-01 17:38:04.925179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.384 [2024-10-01 17:38:04.925187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.928683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.646 [2024-10-01 17:38:04.937934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:04.938342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:04.938364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:04.938377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:04.938594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:04.938810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:04.938820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:04.938828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.942370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.646 [2024-10-01 17:38:04.951837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:04.952332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:04.952372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:04.952385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:04.952621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:04.952842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:04.952851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:04.952859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.956358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.646 [2024-10-01 17:38:04.965607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:04.966122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:04.966161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:04.966174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:04.966411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:04.966632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:04.966641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:04.966650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.970148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.646 [2024-10-01 17:38:04.979402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:04.979977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:04.980002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:04.980011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:04.980228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:04.980446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:04.980459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:04.980466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.983965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.646 [2024-10-01 17:38:04.993326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:04.993844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:04.993883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:04.993894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:04.994137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:04.994359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:04.994369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:04.994377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:04.997870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.646 [2024-10-01 17:38:05.007121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:05.007625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:05.007665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:05.007678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:05.007915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:05.008143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:05.008154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:05.008163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:05.011653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.646 [2024-10-01 17:38:05.020902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:05.021408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:05.021448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:05.021459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:05.021695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:05.021915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:05.021926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:05.021934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:05.025434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.646 [2024-10-01 17:38:05.034689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:05.035295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:05.035334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:05.035345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:05.035581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:05.035801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.646 [2024-10-01 17:38:05.035811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.646 [2024-10-01 17:38:05.035819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.646 [2024-10-01 17:38:05.039316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.646 [2024-10-01 17:38:05.048566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.646 [2024-10-01 17:38:05.049123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.646 [2024-10-01 17:38:05.049163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.646 [2024-10-01 17:38:05.049176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.646 [2024-10-01 17:38:05.049413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.646 [2024-10-01 17:38:05.049634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.049643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.049651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.053150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.647 [2024-10-01 17:38:05.062403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.062946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.062985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.063004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.063240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.063461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.063470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.063478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.066971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.647 [2024-10-01 17:38:05.076226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.076752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.076772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.076785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.077007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.077225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.077234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.077242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.080729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.647 4957.83 IOPS, 19.37 MiB/s [2024-10-01 17:38:05.090005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.090543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.090560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.090568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.090784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.091006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.091016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.091024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.094508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.647 [2024-10-01 17:38:05.103757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.104268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.104308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.104321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.104558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.104779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.104789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.104796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.108299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.647 [2024-10-01 17:38:05.117554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.118003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.118025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.118033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.118251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.118469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.118483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.118490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.121987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.647 [2024-10-01 17:38:05.131442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.131928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.131945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.131953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.132174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.132391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.132400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.132407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.135889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.647 [2024-10-01 17:38:05.145337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.145823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.145840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.145848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.146068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.146286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.146296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.146303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.149785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.647 [2024-10-01 17:38:05.159065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.159616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.159634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.159642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.159858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.160079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.160089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.160096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.163583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.647 [2024-10-01 17:38:05.172826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.173406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.173446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.173458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.173693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.173914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.173923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.173931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.647 [2024-10-01 17:38:05.177427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.647 [2024-10-01 17:38:05.186690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.647 [2024-10-01 17:38:05.187203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.647 [2024-10-01 17:38:05.187224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.647 [2024-10-01 17:38:05.187232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.647 [2024-10-01 17:38:05.187449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.647 [2024-10-01 17:38:05.187665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.647 [2024-10-01 17:38:05.187675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.647 [2024-10-01 17:38:05.187683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.191172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.911 [2024-10-01 17:38:05.200419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.201016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.201056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.201068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.201308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.201528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.201538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.201546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.205046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.911 [2024-10-01 17:38:05.214295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.214828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.214868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.214879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.215127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.215348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.215358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.215366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.218855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.911 [2024-10-01 17:38:05.228115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.228729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.228769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.228780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.229023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.229244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.229254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.229262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.232753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.911 [2024-10-01 17:38:05.242005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.242620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.242660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.242672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.242908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.243136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.243147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.243155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.246644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.911 [2024-10-01 17:38:05.255895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.256382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.256422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.256433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.256669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.256890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.256900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.256912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.260412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.911 [2024-10-01 17:38:05.269660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.270293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.270333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.270344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.270579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.270800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.270809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.270817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.274313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.911 [2024-10-01 17:38:05.283572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.283964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.283984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.283999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.284216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.284433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.284442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.284450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.287933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.911 [2024-10-01 17:38:05.297381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.298005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.298044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.298055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.298291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.298511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.298520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.298528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.302024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.911 [2024-10-01 17:38:05.311311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.311959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.312007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.312020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.312257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.312477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.312486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.312494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.911 [2024-10-01 17:38:05.315985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.911 [2024-10-01 17:38:05.325061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.911 [2024-10-01 17:38:05.325695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.911 [2024-10-01 17:38:05.325735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.911 [2024-10-01 17:38:05.325745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.911 [2024-10-01 17:38:05.325981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.911 [2024-10-01 17:38:05.326209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.911 [2024-10-01 17:38:05.326220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.911 [2024-10-01 17:38:05.326228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.329717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.912 [2024-10-01 17:38:05.338968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.339603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.339643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.339655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.339890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.340119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.340130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.340138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.343630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.912 [2024-10-01 17:38:05.352878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.353531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.353571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.353583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.353820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.354054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.354066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.354074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.357568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.912 [2024-10-01 17:38:05.366655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.367202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.367222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.367231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.367448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.367664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.367673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.367680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.371172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.912 [2024-10-01 17:38:05.380464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.381001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.381018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.381026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.381242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.381457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.381466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.381473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.384970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.912 [2024-10-01 17:38:05.394226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.394752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.394769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.394777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.394992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.395214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.395222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.395229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.398720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.912 [2024-10-01 17:38:05.407969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.408585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.408625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.408636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.408871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.409099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.409108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.409116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.412605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.912 [2024-10-01 17:38:05.421866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.422471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.422511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.422522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.422758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.422978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.422986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.423003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.426495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:06.912 [2024-10-01 17:38:05.435749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.436362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.436401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.436412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.436648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.436867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.436876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.436884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.440381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:06.912 [2024-10-01 17:38:05.449633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:06.912 [2024-10-01 17:38:05.450276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.912 [2024-10-01 17:38:05.450315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:06.912 [2024-10-01 17:38:05.450335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:06.912 [2024-10-01 17:38:05.450571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:06.912 [2024-10-01 17:38:05.450791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:06.912 [2024-10-01 17:38:05.450800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:06.912 [2024-10-01 17:38:05.450808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:06.912 [2024-10-01 17:38:05.454309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.174 [2024-10-01 17:38:05.463563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.174 [2024-10-01 17:38:05.464132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-10-01 17:38:05.464171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.174 [2024-10-01 17:38:05.464184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.174 [2024-10-01 17:38:05.464423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.174 [2024-10-01 17:38:05.464643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.174 [2024-10-01 17:38:05.464652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.174 [2024-10-01 17:38:05.464659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.174 [2024-10-01 17:38:05.468162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.174 [2024-10-01 17:38:05.477416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.174 [2024-10-01 17:38:05.478043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-10-01 17:38:05.478082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.174 [2024-10-01 17:38:05.478095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.478333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.478553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.478562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.478570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.482070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.175 [2024-10-01 17:38:05.491335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.491950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.491988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.492010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.492249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.492474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.492484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.492492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.495987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.175 [2024-10-01 17:38:05.505244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.505763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.505783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.505791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.506012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.506229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.506238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.506245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.509729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.175 [2024-10-01 17:38:05.518974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.519627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.519666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.519677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.519912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.520140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.520149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.520157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.523658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.175 [2024-10-01 17:38:05.532701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.533379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.533418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.533429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.533664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.533884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.533892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.533900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:38:07.175 [2024-10-01 17:38:05.537398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.175 [2024-10-01 17:38:05.546446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.546992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.547016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.547025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.547241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.547457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.547466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.547473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.550958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.175 [2024-10-01 17:38:05.560211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.560837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.560876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.560886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.561129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.561350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.561359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.561367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.564858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.175 [2024-10-01 17:38:05.573943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.574494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.574515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.574523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.574739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.574955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.574963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.574971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.578663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.175 [2024-10-01 17:38:05.585122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.175 [2024-10-01 17:38:05.587736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.588328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.588367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.588378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.175 [2024-10-01 17:38:05.588613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.175 [2024-10-01 17:38:05.588833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.175 [2024-10-01 17:38:05.588842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.175 [2024-10-01 17:38:05.588849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.175 [2024-10-01 17:38:05.592346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.175 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.175 [2024-10-01 17:38:05.601612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.175 [2024-10-01 17:38:05.602173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-10-01 17:38:05.602212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.175 [2024-10-01 17:38:05.602224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.176 [2024-10-01 17:38:05.602463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.176 [2024-10-01 17:38:05.602682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.176 [2024-10-01 17:38:05.602691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.176 [2024-10-01 17:38:05.602698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.176 [2024-10-01 17:38:05.606198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.176 [2024-10-01 17:38:05.615452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.176 [2024-10-01 17:38:05.616076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-10-01 17:38:05.616115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.176 [2024-10-01 17:38:05.616128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.176 [2024-10-01 17:38:05.616367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.176 [2024-10-01 17:38:05.616592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.176 [2024-10-01 17:38:05.616600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.176 [2024-10-01 17:38:05.616608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:38:07.176 Malloc0 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.176 [2024-10-01 17:38:05.620110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.176 [2024-10-01 17:38:05.629374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.176 [2024-10-01 17:38:05.630026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-10-01 17:38:05.630065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.176 [2024-10-01 17:38:05.630076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.176 [2024-10-01 17:38:05.630312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:07.176 [2024-10-01 17:38:05.630531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.176 [2024-10-01 17:38:05.630540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.176 [2024-10-01 17:38:05.630548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.176 [2024-10-01 17:38:05.634052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:07.176 [2024-10-01 17:38:05.643117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.176 [2024-10-01 17:38:05.643681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-10-01 17:38:05.643701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ced0 with addr=10.0.0.2, port=4420 00:38:07.176 [2024-10-01 17:38:05.643709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ced0 is same with the state(6) to be set 00:38:07.176 [2024-10-01 17:38:05.643925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ced0 (9): Bad file descriptor 00:38:07.176 [2024-10-01 17:38:05.644148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:07.176 [2024-10-01 17:38:05.644158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:07.176 [2024-10-01 17:38:05.644165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:07.176 [2024-10-01 17:38:05.647659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:07.176 [2024-10-01 17:38:05.649251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.176 17:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3290288 00:38:07.176 [2024-10-01 17:38:05.656919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:07.436 [2024-10-01 17:38:05.819201] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
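The rpc_cmd calls that host/bdevperf.sh@18-21 issues in the trace above amount to the usual four-step NVMe-oF target setup. As a rough standalone equivalent (a sketch only: it assumes the default RPC socket and that a TCP transport has already been created on the target, as the harness does via nvmf_create_transport -t tcp):

  # 64 MB malloc bdev with a 512-byte block size, exported through subsystem cnode1 on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The interleaved connect() failed, errno = 111 (ECONNREFUSED) records come from the bdev_nvme reset loop retrying in parallel while this configuration is applied; once the listener notice for 10.0.0.2 port 4420 appears, the final record above reports the controller reset completing successfully.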
00:38:15.651 4694.29 IOPS, 18.34 MiB/s 5516.00 IOPS, 21.55 MiB/s 6176.67 IOPS, 24.13 MiB/s 6694.60 IOPS, 26.15 MiB/s 7103.00 IOPS, 27.75 MiB/s 7450.42 IOPS, 29.10 MiB/s 7731.54 IOPS, 30.20 MiB/s 7994.57 IOPS, 31.23 MiB/s 8214.33 IOPS, 32.09 MiB/s 00:38:15.651 Latency(us) 00:38:15.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.651 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:15.651 Verification LBA range: start 0x0 length 0x4000 00:38:15.651 Nvme1n1 : 15.01 8216.16 32.09 10258.14 0.00 6903.59 778.24 13871.79 00:38:15.651 =================================================================================================================== 00:38:15.651 Total : 8216.16 32.09 10258.14 0.00 6903.59 778.24 13871.79 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:15.910 rmmod nvme_tcp 00:38:15.910 rmmod nvme_fabrics 00:38:15.910 rmmod nvme_keyring 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:15.910 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3291510 ']' 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3291510 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3291510 ']' 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3291510 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3291510 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3291510' 00:38:15.911 killing process with pid 3291510 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3291510 00:38:15.911 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3291510 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:16.170 17:38:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.078 17:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:18.079 00:38:18.079 real 0m27.672s 00:38:18.079 user 1m2.675s 00:38:18.079 sys 0m7.173s 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.079 ************************************ 00:38:18.079 END TEST nvmf_bdevperf 00:38:18.079 ************************************ 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:18.079 17:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.339 ************************************ 00:38:18.339 START TEST nvmf_target_disconnect 00:38:18.339 ************************************ 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:18.339 * Looking for test storage... 
00:38:18.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:18.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.339 --rc genhtml_branch_coverage=1 00:38:18.339 --rc genhtml_function_coverage=1 00:38:18.339 --rc genhtml_legend=1 00:38:18.339 --rc geninfo_all_blocks=1 00:38:18.339 --rc geninfo_unexecuted_blocks=1 00:38:18.339 00:38:18.339 ' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:18.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.339 --rc genhtml_branch_coverage=1 00:38:18.339 --rc genhtml_function_coverage=1 00:38:18.339 --rc genhtml_legend=1 00:38:18.339 --rc geninfo_all_blocks=1 00:38:18.339 --rc geninfo_unexecuted_blocks=1 00:38:18.339 00:38:18.339 ' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:18.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.339 --rc genhtml_branch_coverage=1 00:38:18.339 --rc genhtml_function_coverage=1 00:38:18.339 --rc genhtml_legend=1 00:38:18.339 --rc geninfo_all_blocks=1 00:38:18.339 --rc geninfo_unexecuted_blocks=1 00:38:18.339 00:38:18.339 ' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:18.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.339 --rc genhtml_branch_coverage=1 00:38:18.339 --rc genhtml_function_coverage=1 00:38:18.339 --rc genhtml_legend=1 00:38:18.339 --rc geninfo_all_blocks=1 00:38:18.339 --rc geninfo_unexecuted_blocks=1 00:38:18.339 00:38:18.339 ' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:18.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:18.339 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:18.600 17:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:26.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:26.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:26.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:26.736 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
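What the scan above is doing: nvmf/common.sh enumerates the supported NICs it found (two Intel E810 ports, 8086:0x159b), resolves each PCI function to its kernel netdev through sysfs, and the records that follow move one of those ports (cvl_0_0) into a private network namespace so the target side (10.0.0.2) and the initiator side (10.0.0.1) can exercise real hardware on a single machine. A condensed manual equivalent, assuming the cvl_0_0/cvl_0_1 names reported above (a sketch, not the harness code itself):

  # map a PCI function to the netdev behind it, the same way the script does
  ls /sys/bus/pci/devices/0000:4b:00.0/net/          # -> cvl_0_0
  # isolate the target-side port in its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # open the NVMe/TCP port on the initiator interface and confirm reachability
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2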
00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:26.736 17:38:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:26.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:26.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:38:26.737 00:38:26.737 --- 10.0.0.2 ping statistics --- 00:38:26.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.737 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:26.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:26.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:26.737 00:38:26.737 --- 10.0.0.1 ping statistics --- 00:38:26.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.737 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:26.737 ************************************ 00:38:26.737 START TEST nvmf_target_disconnect_tc1 00:38:26.737 ************************************ 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:26.737 17:38:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:26.737 [2024-10-01 17:38:24.221818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.737 [2024-10-01 17:38:24.221866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97e0e0 with addr=10.0.0.2, port=4420 00:38:26.737 [2024-10-01 17:38:24.221890] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:26.737 [2024-10-01 17:38:24.221903] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:26.737 [2024-10-01 17:38:24.221910] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:26.737 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:26.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:26.737 Initializing NVMe Controllers 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:26.737 00:38:26.737 real 0m0.108s 00:38:26.737 user 0m0.047s 00:38:26.737 sys 0m0.060s 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:26.737 ************************************ 00:38:26.737 END TEST nvmf_target_disconnect_tc1 00:38:26.737 ************************************ 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
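nvmf_target_disconnect_tc1 above is a negative test: the reconnect example is launched against 10.0.0.2:4420 before any nvmf target has been started, so the connect() failed, errno = 111 probe error and the non-zero exit status are the expected result, and the harness's NOT wrapper turns that failure into a pass. Conceptually it reduces to something like the following (a simplified sketch, not the harness code):

  # expect the probe to fail while nothing is listening on 10.0.0.2:4420
  if ! build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo 'tc1 OK: spdk_nvme_probe() failed as expected'
  fi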
00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:26.737 ************************************ 00:38:26.737 START TEST nvmf_target_disconnect_tc2 00:38:26.737 ************************************ 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3297441 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3297441 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3297441 ']' 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:26.737 17:38:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.737 [2024-10-01 17:38:24.388817] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:38:26.737 [2024-10-01 17:38:24.388883] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.737 [2024-10-01 17:38:24.480029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:26.737 [2024-10-01 17:38:24.528079] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.737 [2024-10-01 17:38:24.528135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:26.737 [2024-10-01 17:38:24.528143] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.737 [2024-10-01 17:38:24.528150] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.737 [2024-10-01 17:38:24.528156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:26.737 [2024-10-01 17:38:24.528783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:26.737 [2024-10-01 17:38:24.528911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:26.737 [2024-10-01 17:38:24.529060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:26.737 [2024-10-01 17:38:24.529060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:26.737 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:26.737 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:38:26.737 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.738 Malloc0 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.738 [2024-10-01 17:38:25.263761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.738 17:38:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.738 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.997 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.998 [2024-10-01 17:38:25.292017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3297689 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:26.998 17:38:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:28.912 17:38:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3297441 00:38:28.912 17:38:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Write completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Write completed with 
error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Write completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.912 Read completed with error (sct=0, sc=8) 00:38:28.912 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Read completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 Write completed with error (sct=0, sc=8) 00:38:28.913 starting I/O failed 00:38:28.913 [2024-10-01 17:38:27.320230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:28.913 [2024-10-01 17:38:27.320507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.320529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.320833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.320843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.321226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.321270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 
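The burst of Read/Write completed with error (sct=0, sc=8) ... starting I/O failed records above, followed by the repeated connect() failed, errno = 111 retries, is the intended behaviour of nvmf_target_disconnect_tc2: the reconnect example is started in the background against the live target and the nvmf_tgt process brought up earlier (nvmfpid=3297441) is then hard-killed while I/O is outstanding, so queued commands complete with errors and every reconnect attempt is refused while no listener is present. Condensed, the sequence the trace is executing looks roughly like this (a sketch; the kill step is inferred from the harness flow rather than quoted verbatim):

  # start the I/O load in the background, then take the target away underneath it
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # $nvmfpid = the nvmf_tgt PID (3297441 in this run); in-flight I/O now fails
  sleep 2              # reconnect keeps retrying; connect() returns ECONNREFUSED (111) until a target is listening again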
00:38:28.913 [2024-10-01 17:38:27.321597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.321612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.321902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.321913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.322364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.322403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.322733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.322746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.323217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.323256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.323589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.323601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.323892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.323903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.324180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.324192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.324519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.324530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.324844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.324854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 
00:38:28.913 [2024-10-01 17:38:27.325149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.325160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.325496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.325506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.325824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.325834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.326043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.326054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.326402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.326413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.326732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.326742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.327071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.327082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.327400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.327411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.327694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.327705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.328007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.328017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 
00:38:28.913 [2024-10-01 17:38:27.328230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.328243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.328552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.328563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.328864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.328874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.329250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.329263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.329585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.329596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.329882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.329893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.330066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.330078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.330374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.330385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.330667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.913 [2024-10-01 17:38:27.330677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.913 qpair failed and we were unable to recover it. 00:38:28.913 [2024-10-01 17:38:27.330955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.330965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 
00:38:28.914 [2024-10-01 17:38:27.331296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.331307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.331645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.331656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.331938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.331949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.332337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.332347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.332625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.332635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.332903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.332912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.333234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.333244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.333560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.333570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.333895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.333905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.334217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.334230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 
00:38:28.914 [2024-10-01 17:38:27.334599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.334609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.334913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.334923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.335242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.335252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.335528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.335538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.335805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.335814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.336000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.336011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.336392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.336402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.336729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.336739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.337027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.337037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.337327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.337336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 
00:38:28.914 [2024-10-01 17:38:27.337526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.337537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.337972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.337981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.338288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.338299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.338627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.338637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.339015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.339025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.339448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.339459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.339672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.339682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.339969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.339980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.340308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.340318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.340683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.340693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 
00:38:28.914 [2024-10-01 17:38:27.341006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.341017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.341322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.341333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.341638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.341647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.342008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.342018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.342327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.342337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.342632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.342642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.342932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.342942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.914 qpair failed and we were unable to recover it. 00:38:28.914 [2024-10-01 17:38:27.343238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.914 [2024-10-01 17:38:27.343249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.343543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.343553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.343835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.343845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 
00:38:28.915 [2024-10-01 17:38:27.343988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.344006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.344296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.344313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.344628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.344638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.344958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.344969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.345280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.345291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.345578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.345589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.345918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.345929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.346247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.346258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.346557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.346567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.346841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.346854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 
00:38:28.915 [2024-10-01 17:38:27.347136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.347147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.347511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.347521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.347789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.347799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.348086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.348097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.348398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.348408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.348693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.348705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.349002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.349013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.349310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.349320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.349622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.349632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.349818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.349827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 
00:38:28.915 [2024-10-01 17:38:27.350167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.350177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.350492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.350502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.350805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.350815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.351117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.351128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.351421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.351431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.351710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.351720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.352037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.352048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.352227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.352237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.352603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.352614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.352970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.352984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 
00:38:28.915 [2024-10-01 17:38:27.353310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.353325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.353631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.353645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.353956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.353969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.354358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.354372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.354791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.354805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.355120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.915 [2024-10-01 17:38:27.355135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.915 qpair failed and we were unable to recover it. 00:38:28.915 [2024-10-01 17:38:27.355474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.355491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.355790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.355804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.356030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.356044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.356323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.356336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 
00:38:28.916 [2024-10-01 17:38:27.356645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.356658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.356905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.356918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.357207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.357221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.357541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.357554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.357941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.357954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.358269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.358282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.358576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.358589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.358868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.358882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.359215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.359230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.359463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.359477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 
00:38:28.916 [2024-10-01 17:38:27.359775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.359789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.360109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.360123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.360431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.360445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.360766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.360779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.361145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.361159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.361497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.361510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.361713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.361728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.362040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.362054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.362411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.362425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.362644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.362657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 
00:38:28.916 [2024-10-01 17:38:27.362966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.362980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.363321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.363335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.363530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.363545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.363893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.363907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.364182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.364197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.364506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.364520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.364824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.364838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.365150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.365164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.365469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.365483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.365782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.365796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 
00:38:28.916 [2024-10-01 17:38:27.365980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.365999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.366395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.366408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.366777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.366791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.367106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.916 [2024-10-01 17:38:27.367119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.916 qpair failed and we were unable to recover it. 00:38:28.916 [2024-10-01 17:38:27.367410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.367424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.367622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.367635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.367926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.367944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.368240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.368255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.368572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.368586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.368905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.368919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 
00:38:28.917 [2024-10-01 17:38:27.369108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.369127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.369445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.369463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.369759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.369775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.369986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.370010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.370368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.370386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.370704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.370721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.371012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.371030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.371364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.371381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.371727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.371744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.371958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.371975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 
00:38:28.917 [2024-10-01 17:38:27.372289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.372308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.372628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.372646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.372946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.372963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.373181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.373201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.373593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.373611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.373929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.373947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.374259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.374278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.374631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.374649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.374964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.374982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.375304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.375322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 
00:38:28.917 [2024-10-01 17:38:27.375630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.375647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.375937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.375954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.376272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.376291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.376616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.376634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.376939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.376957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.377278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.377297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.377613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.377631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.917 [2024-10-01 17:38:27.377969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.917 [2024-10-01 17:38:27.377986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.917 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.378308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.378328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.378649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.378675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 
00:38:28.918 [2024-10-01 17:38:27.378901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.378929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.379253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.379281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.379605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.379630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.380020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.380047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.380417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.380442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.380786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.380811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.381170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.381202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.381564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.381588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.381917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.381942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.382293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.382319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 
00:38:28.918 [2024-10-01 17:38:27.382676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.382701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.383045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.383072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.383417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.383443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.383679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.383704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.384075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.384103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.384432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.384456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.384818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.384841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.385178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.385205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.385513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.385537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.385855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.385879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 
00:38:28.918 [2024-10-01 17:38:27.386126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.386153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.386499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.386524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.386892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.386917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.387260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.387287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.387629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.387654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.388048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.388075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.388310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.388335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.388655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.388680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.389011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.389037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.389380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.389404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 
00:38:28.918 [2024-10-01 17:38:27.389762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.389786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.390031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.390057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.390451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.390475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.390815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.390840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.391182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.391208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.391572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.918 [2024-10-01 17:38:27.391599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.918 qpair failed and we were unable to recover it. 00:38:28.918 [2024-10-01 17:38:27.391941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.391969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.392356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.392384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.392714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.392742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.393087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.393119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 
00:38:28.919 [2024-10-01 17:38:27.393439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.393467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.393845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.393873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.394252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.394280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.394627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.394655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.395008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.395038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.395375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.395403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.395764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.395799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.396088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.396119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.396476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.396505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.396874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.396902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 
00:38:28.919 [2024-10-01 17:38:27.397222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.397251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.397617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.397645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.397939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.397967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.398333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.398362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.398701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.398728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.399076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.399104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.399488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.399516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.399874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.399902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.400240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.400269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.400511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.400538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 
00:38:28.919 [2024-10-01 17:38:27.400861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.400890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.401123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.401154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.401488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.401516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.401858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.401887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.402296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.402325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.402680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.402708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.403078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.403107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.403505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.403532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.403771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.403799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.404148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.404179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 
00:38:28.919 [2024-10-01 17:38:27.404537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.404565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.404910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.404937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.405315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.405345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.405623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.405651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.919 [2024-10-01 17:38:27.405976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.919 [2024-10-01 17:38:27.406013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.919 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.406361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.406391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.406802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.406830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.407183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.407211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.407526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.407554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.407906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.407934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 
00:38:28.920 [2024-10-01 17:38:27.408306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.408336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.408693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.408720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.409081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.409110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.409453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.409481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.409842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.409870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.410290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.410319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.410661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.410695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.411047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.411075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.411432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.411459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.411802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.411830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 
00:38:28.920 [2024-10-01 17:38:27.412193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.412224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.412568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.412596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.413002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.413032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.413383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.413411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.413770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.413797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.414191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.414220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.414536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.414564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.414935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.414962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.415218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.415250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.415587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.415616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 
00:38:28.920 [2024-10-01 17:38:27.415941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.415969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.416331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.416361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.416601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.416628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.416980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.417020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.417331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.417360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.417689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.417717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.418065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.418095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.418422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.418451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.920 [2024-10-01 17:38:27.418808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.920 [2024-10-01 17:38:27.418836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.920 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.419173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.419202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 
00:38:28.921 [2024-10-01 17:38:27.419442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.419473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.419846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.419874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.420209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.420239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.420475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.420506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.420827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.420856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.421198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.421228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.421640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.421668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.422020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.422049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.422429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.422456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.422802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.422831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 
00:38:28.921 [2024-10-01 17:38:27.423169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.423198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.423569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.423597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.423913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.423942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.424301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.424331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.424670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.424698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.425046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.425075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.425435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.425470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.425783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.425812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.426157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.426187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.426527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.426555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 
00:38:28.921 [2024-10-01 17:38:27.426913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.426940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.427292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.427322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.427700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.427728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.428060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.428089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.428442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.428471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.428822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.428850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.429189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.429218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.429560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.429587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.429845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.429873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.430182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.430211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 
00:38:28.921 [2024-10-01 17:38:27.430587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.430616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.430961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.430989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.431346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.431375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.921 [2024-10-01 17:38:27.431627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.921 [2024-10-01 17:38:27.431655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.921 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.432005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.432035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.436020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.436079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.436458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.436489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.436821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.436852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.437189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.437223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.437549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.437580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 
00:38:28.922 [2024-10-01 17:38:27.437946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.437978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.438368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.438400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.438766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.438794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.439155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.439187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.439531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.439560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.439928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.439959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.440313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.440344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.440758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.440789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.441147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.441178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.441544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.441574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 
00:38:28.922 [2024-10-01 17:38:27.441899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.441929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.442183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.442219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.442542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.442572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.442895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.442924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.443342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.443374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.443779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.443810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.444202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.444235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.444571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.444595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.444855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.444879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.445259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.445285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 
00:38:28.922 [2024-10-01 17:38:27.445645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.445671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.446039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.446067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.446390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.446412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.446755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.446778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.447118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.447146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.447490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.922 [2024-10-01 17:38:27.447520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.922 qpair failed and we were unable to recover it. 00:38:28.922 [2024-10-01 17:38:27.447838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.447863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.448217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.448240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.448568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.448592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.448917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.448941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 
00:38:28.923 [2024-10-01 17:38:27.449277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.449302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.449632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.449654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.449988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.450022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.450233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.450255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.450379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.450401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.450731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.450754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.451076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.451100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.451449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.451473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.451817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.451841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.452197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.452220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 
00:38:28.923 [2024-10-01 17:38:27.452558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.452581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.452840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.452863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.453220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.453245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:28.923 [2024-10-01 17:38:27.453570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.923 [2024-10-01 17:38:27.453594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:28.923 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.453918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.453943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.454287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.454311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.454652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.454674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.454885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.454907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.455217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.455242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.455596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.455619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 
00:38:29.199 [2024-10-01 17:38:27.455914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.455937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.456291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.456315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.456528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.456550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.456920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.456943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.457311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.457335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.457661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.457684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.457894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.457926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.458270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.458295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.458611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.458635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.458980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.459014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 
00:38:29.199 [2024-10-01 17:38:27.459344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.199 [2024-10-01 17:38:27.459366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.199 qpair failed and we were unable to recover it. 00:38:29.199 [2024-10-01 17:38:27.459703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.459726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.460089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.460113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.460449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.460471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.460773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.460796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.461049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.461072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.461432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.461455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.461816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.461839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.462164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.462188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.462533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.462556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 
00:38:29.200 [2024-10-01 17:38:27.462919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.462942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.463287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.463311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.463659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.463683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.464034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.464059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.464380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.464404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.464758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.464781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.465125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.465150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.465480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.465503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.465827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.465851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.466202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.466227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 
00:38:29.200 [2024-10-01 17:38:27.466574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.466597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.466960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.466983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.467387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.467411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.467633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.467658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.468032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.468057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.468378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.468401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.468624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.468649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.468978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.469016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.469207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.469236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.469580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.469610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 
00:38:29.200 [2024-10-01 17:38:27.469942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.469971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.470363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.470392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.470822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.470850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.471262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.471292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.471643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.471672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.472044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.472075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.472444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.472480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.472815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.472845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.473211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.473241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 00:38:29.200 [2024-10-01 17:38:27.473580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.200 [2024-10-01 17:38:27.473609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.200 qpair failed and we were unable to recover it. 
00:38:29.201 [2024-10-01 17:38:27.473964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.473991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Write completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 Read completed with error (sct=0, sc=8) 00:38:29.201 starting I/O failed 00:38:29.201 [2024-10-01 17:38:27.474227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:29.201 [2024-10-01 17:38:27.474651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.474671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, 
port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.474978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.474987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.475329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.475338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.475659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.475671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.476003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.476018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.476372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.476386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.476939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.476953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.477404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.477434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.477750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.477759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.478168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.478199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.478550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.478558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 
00:38:29.201 [2024-10-01 17:38:27.478867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.478874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.479265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.479295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.479649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.479658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.479977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.479984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.480422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.480456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.480743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.480751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.481213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.481243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.481543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.481553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.481865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.481872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.482163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.482171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 
00:38:29.201 [2024-10-01 17:38:27.482489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.482495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.482817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.482824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.483035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.483042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.483302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.483309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.483675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.201 [2024-10-01 17:38:27.483682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.201 qpair failed and we were unable to recover it. 00:38:29.201 [2024-10-01 17:38:27.483966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.483973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.484284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.484291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.484606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.484613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.484925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.484932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.485219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.485227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 
00:38:29.202 [2024-10-01 17:38:27.485532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.485539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.485853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.485860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.486176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.486184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.486465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.486472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.486785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.486793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.487084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.487092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.487381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.487389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.487697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.487705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.488035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.488044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.488356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.488364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 
00:38:29.202 [2024-10-01 17:38:27.488651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.488659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.488958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.488964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.489273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.489280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.489587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.489594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.489894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.489901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.490209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.490216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.490525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.490532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.490831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.490838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.491018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.491026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.491396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.491402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 
00:38:29.202 [2024-10-01 17:38:27.491770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.491777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.492182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.492190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.492509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.492515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.492698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.492705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.492934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.492943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.493194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.493202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.493553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.493559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.493952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.493959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.494278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.494285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.494593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.494600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 
00:38:29.202 [2024-10-01 17:38:27.494906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.494913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.495241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.495248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.495538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.495545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.202 qpair failed and we were unable to recover it. 00:38:29.202 [2024-10-01 17:38:27.495870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.202 [2024-10-01 17:38:27.495876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.496207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.496215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.496567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.496574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.496814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.496821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.497163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.497170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.497481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.497488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.497795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.497802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 
00:38:29.203 [2024-10-01 17:38:27.498013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.498020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.498304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.498311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.498621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.498628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.498938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.498945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.499142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.499150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.499479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.499486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.499752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.499759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.500083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.500090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.500416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.500424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.500741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.500748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 
00:38:29.203 [2024-10-01 17:38:27.501042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.501049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.501380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.501387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.501675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.501688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.502023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.502030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.502339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.502346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.502658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.502665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.502959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.502966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.503291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.503299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.503605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.503612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.503907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.503914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 
00:38:29.203 [2024-10-01 17:38:27.504237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.504245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.504557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.504565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.504859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.504867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.505161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.505170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.505490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.505499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.505679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.505688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.506006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.506014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.506324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.506332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.506639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.506649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.506935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.506942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 
00:38:29.203 [2024-10-01 17:38:27.507267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.203 [2024-10-01 17:38:27.507274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.203 qpair failed and we were unable to recover it. 00:38:29.203 [2024-10-01 17:38:27.507567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.507574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.507894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.507901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.508116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.508123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.508439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.508446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.508648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.508655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.508999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.509006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.509326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.509333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.509640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.509647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.509960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.509967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 
00:38:29.204 [2024-10-01 17:38:27.510279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.510286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.510595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.510602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.510919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.510926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.511228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.511236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.511578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.511586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.511903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.511910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.512210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.512217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.512515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.512522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.512904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.512911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.513213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.513220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 
00:38:29.204 [2024-10-01 17:38:27.513525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.513532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.513755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.513763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.514091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.514098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.514421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.514427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.514634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.514641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.514915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.514922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.515252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.515259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.515551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.515558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.515857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.515864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.516164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.516171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 
00:38:29.204 [2024-10-01 17:38:27.516531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.516537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.516830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.516837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.517192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.517199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.517509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.517516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.204 qpair failed and we were unable to recover it. 00:38:29.204 [2024-10-01 17:38:27.517804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.204 [2024-10-01 17:38:27.517812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.518154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.518162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.518472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.518478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.518766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.518773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.519072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.519079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.519417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.519424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 
00:38:29.205 [2024-10-01 17:38:27.519734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.519741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.520039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.520046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.520356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.520362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.520539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.520547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.520812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.520819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.521120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.521127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.521425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.521432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.521754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.521761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.522066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.522073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.522377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.522384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 
00:38:29.205 [2024-10-01 17:38:27.522689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.522695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.523008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.523015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.523333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.523340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.523646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.523653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.523958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.523965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.524286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.524294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.524598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.524604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.524959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.524967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.525340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.525348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.525653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.525661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 
00:38:29.205 [2024-10-01 17:38:27.525969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.525977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.526303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.526311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.526613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.526619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.526931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.526938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.527152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.527159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.527461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.527468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.527822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.527829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.528120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.528127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.528447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.528453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.528766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.528772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 
00:38:29.205 [2024-10-01 17:38:27.529077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.529084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.529465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.529472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.205 [2024-10-01 17:38:27.529752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.205 [2024-10-01 17:38:27.529759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.205 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.530052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.530060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.530374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.530382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.530713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.530720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.531026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.531034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.531201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.531208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.531600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.531606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.531898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.531905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 
00:38:29.206 [2024-10-01 17:38:27.532212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.532218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.532530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.532537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.532813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.532820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.533025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.533032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.533345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.533352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.533678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.533685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.533999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.534005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.534311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.534318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.534614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.534621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.534931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.534938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 
00:38:29.206 [2024-10-01 17:38:27.535247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.535254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.535551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.535558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.535865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.535873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.536166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.536173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.536469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.536476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.536783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.536789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.537016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.537023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.537358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.537366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.537676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.537683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.537992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.538008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 
00:38:29.206 [2024-10-01 17:38:27.538311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.538318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.538583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.538590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.538889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.538895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.539203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.539210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.539518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.539525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.539830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.539837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.540170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.540178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.540485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.540491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.540780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.540788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 00:38:29.206 [2024-10-01 17:38:27.541087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.206 [2024-10-01 17:38:27.541094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.206 qpair failed and we were unable to recover it. 
00:38:29.206 [2024-10-01 17:38:27.541400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.541406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.541715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.541721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.542018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.542025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.542376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.542384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.542689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.542697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.542984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.542991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.543298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.543305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.543589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.543596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.543906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.543912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.544191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.544198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 
00:38:29.207 [2024-10-01 17:38:27.544514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.544521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.544825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.544832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.545138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.545145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.545422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.545429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.545759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.545765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.546073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.546080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.546399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.546405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.546785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.546792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.547078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.547085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.547388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.547395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 
00:38:29.207 [2024-10-01 17:38:27.547692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.547698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.547907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.547914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.548229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.548236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.548540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.548547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.548856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.548862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.549088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.549095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.549337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.549344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.549659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.549666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.549973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.549981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.550270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.550277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 
00:38:29.207 [2024-10-01 17:38:27.550564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.550571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.550880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.550888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.551184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.551191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.551488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.551502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.551803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.551811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.552108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.552116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.552483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.552490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.552808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.552814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.207 [2024-10-01 17:38:27.553101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.207 [2024-10-01 17:38:27.553108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.207 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.553429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.553436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 
00:38:29.208 [2024-10-01 17:38:27.553746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.553753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.554063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.554070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.554275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.554282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.554538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.554546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.554870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.554878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.555277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.555284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.555591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.555597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.555836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.555843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.556157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.556164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.556481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.556488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 
00:38:29.208 [2024-10-01 17:38:27.556775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.556782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.557089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.557096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.557414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.557421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.557711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.557718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.558035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.558042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.558327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.558333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.558531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.558538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.558720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.558727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.559031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.559038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.559369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.559376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 
00:38:29.208 [2024-10-01 17:38:27.559689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.559696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.559976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.559983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.560306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.560313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.560625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.560632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.560836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.560842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.561148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.561155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.561481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.561487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.561875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.561882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.562171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.562178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.562491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.562499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 
00:38:29.208 [2024-10-01 17:38:27.562697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.562703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.563268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.208 [2024-10-01 17:38:27.563359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.208 qpair failed and we were unable to recover it. 00:38:29.208 [2024-10-01 17:38:27.563785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.563821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.564303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.564394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.564635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.564643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.564913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.564919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.565260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.565267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.565585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.565592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.565879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.565885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.566203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.566210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 
00:38:29.209 [2024-10-01 17:38:27.566416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.566422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.566575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.566582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.566862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.566869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.567170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.567177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.567473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.567482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.567765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.567773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.567982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.567990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.568268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.568276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.568578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.568584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.568748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.568755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 
00:38:29.209 [2024-10-01 17:38:27.569029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.569037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.569357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.569364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.569628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.569636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.569964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.569971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.570259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.570267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.570571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.570579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.570882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.570889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.571178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.571186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.571504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.571512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.571708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.571715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 
00:38:29.209 [2024-10-01 17:38:27.572030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.572037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.572370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.572377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.572678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.572685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.572904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.572910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.573154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.573161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.573366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.573372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.573645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.573652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.573847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.573853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.574072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.574080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.209 [2024-10-01 17:38:27.574295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.574302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 
00:38:29.209 [2024-10-01 17:38:27.574502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.209 [2024-10-01 17:38:27.574509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.209 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.574831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.574839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.575074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.575081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.575368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.575375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.575657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.575664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.575965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.575972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.576264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.576271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.576478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.576485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.576777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.576784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.576987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.577000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 
00:38:29.210 [2024-10-01 17:38:27.577185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.577192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.577479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.577486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.577797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.577804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.578031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.578038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.578412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.578420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.578589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.578596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.578831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.578838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.579119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.579126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.579300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.579307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.579678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.579685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 
00:38:29.210 [2024-10-01 17:38:27.579889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.579896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.580184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.580191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.580525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.580532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.580830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.580836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.581140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.581147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.581455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.581461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.581771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.581779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.582085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.582092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.582413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.582420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.582723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.582730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 
00:38:29.210 [2024-10-01 17:38:27.582919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.582927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.583226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.583234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.583537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.583545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.583904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.583912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.584221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.584228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.584530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.584538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.584829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.584837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.585131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.585138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.585465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.210 [2024-10-01 17:38:27.585472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.210 qpair failed and we were unable to recover it. 00:38:29.210 [2024-10-01 17:38:27.585799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.585805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 
00:38:29.211 [2024-10-01 17:38:27.586201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.586208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.586525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.586533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.586860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.586867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.587246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.587253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.587547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.587554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.587862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.587869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.588272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.588279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.588587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.588594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.588808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.588815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.589117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.589125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 
00:38:29.211 [2024-10-01 17:38:27.589340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.589347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.589543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.589550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.589882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.589889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.590161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.590168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.590496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.590502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.590791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.590798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.591166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.591173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.591476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.591483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.591686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.591693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.591887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.591895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 
00:38:29.211 [2024-10-01 17:38:27.592269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.592276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.592588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.592595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.592791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.592798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.593002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.593009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.593393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.593400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.593596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.593603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.593947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.593955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.594279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.594286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.594480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.594487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.594833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.594841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 
00:38:29.211 [2024-10-01 17:38:27.595019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.595027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.595232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.595238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.595562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.595569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.211 [2024-10-01 17:38:27.595878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.211 [2024-10-01 17:38:27.595885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.211 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.596084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.596091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.596428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.596435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.596753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.596760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.596982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.596989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.597295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.597302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 00:38:29.212 [2024-10-01 17:38:27.597603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.212 [2024-10-01 17:38:27.597609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.212 qpair failed and we were unable to recover it. 
00:38:29.212 [2024-10-01 17:38:27.597892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.212 [2024-10-01 17:38:27.597898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.212 qpair failed and we were unable to recover it.
[This three-line failure pattern (connect() refused with errno = 111 in posix_sock_create, the matching nvme_tcp_qpair_connect_sock error for tqpair=0x7f9564000b90 at 10.0.0.2, port 4420, and "qpair failed and we were unable to recover it.") repeats back to back through 2024-10-01 17:38:27.660919, console elapsed time 00:38:29.212 to 00:38:29.218; only the timestamps differ between repetitions.]
00:38:29.218 [2024-10-01 17:38:27.661271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.661280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.661579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.661587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.661934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.661942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.662230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.662239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.662425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.662434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.662755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.662761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.663049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.663058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.663424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.663430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.663638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.663645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.663829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.663837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 
00:38:29.218 [2024-10-01 17:38:27.664147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.664154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.664366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.664372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.664731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.664738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.665061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.665069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.665405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.665413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.665575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.665583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.665877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.665885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.666211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.666219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.666389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.666397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.666761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.666768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 
00:38:29.218 [2024-10-01 17:38:27.666959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.666966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.667160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.667168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.667444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.667452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.667761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.667769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.668060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.668068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.668279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.668286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.668435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.668442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.668663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.668670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.668959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.668966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.669269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.669276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 
00:38:29.218 [2024-10-01 17:38:27.669604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.669611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.669904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.669914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.218 [2024-10-01 17:38:27.670200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.218 [2024-10-01 17:38:27.670207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.218 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.670425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.670432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.670639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.670646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.670952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.670959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.671145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.671152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.671380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.671387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.671665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.671672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.672003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.672011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 
00:38:29.219 [2024-10-01 17:38:27.672320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.672327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.672646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.672653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.672951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.672958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.673284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.673292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.673548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.673555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.673920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.673927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.674094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.674102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.674296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.674303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.674632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.674639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.674916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.674923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 
00:38:29.219 [2024-10-01 17:38:27.675095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.675104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.675411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.675418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.675707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.675714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.676029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.676036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.676360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.676368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.676658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.676666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.676974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.676981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.677315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.677322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.677618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.677625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.677909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.677923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 
00:38:29.219 [2024-10-01 17:38:27.678139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.678145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.678467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.678474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.678777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.678784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.679063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.679070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.679395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.679402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.679709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.679716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.680005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.680012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.680346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.680354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.680666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.680672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.680968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.680975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 
00:38:29.219 [2024-10-01 17:38:27.681303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.681311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.219 qpair failed and we were unable to recover it. 00:38:29.219 [2024-10-01 17:38:27.681602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.219 [2024-10-01 17:38:27.681611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.681925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.681932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.682284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.682291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.682627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.682634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.682924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.682931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.683298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.683305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.683589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.683605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.683897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.683904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.684225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.684232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 
00:38:29.220 [2024-10-01 17:38:27.684544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.684551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.684857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.684864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.685160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.685167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.685489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.685497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.685769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.685776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.686057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.686064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.686383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.686391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.686691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.686698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.686890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.686898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.687091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.687098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 
00:38:29.220 [2024-10-01 17:38:27.687366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.687373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.687706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.687712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.688020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.688027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.688317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.688324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.688511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.688518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.688720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.688727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.688916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.688924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.689100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.689108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.689435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.689442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.689730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.689737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 
00:38:29.220 [2024-10-01 17:38:27.690056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.690063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.690379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.690386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.690708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.690715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.691002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.691009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.691317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.691323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.691628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.691635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.691920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.691928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.692243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.692250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.692565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.692572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 00:38:29.220 [2024-10-01 17:38:27.692879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.220 [2024-10-01 17:38:27.692887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.220 qpair failed and we were unable to recover it. 
00:38:29.220 [2024-10-01 17:38:27.693114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.693121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.693424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.693433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.693743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.693751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.694065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.694072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.694407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.694414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.694717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.694724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.695040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.695047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.695414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.695420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.695752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.695760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.696098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.696105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 
00:38:29.221 [2024-10-01 17:38:27.696408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.696421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.696740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.696747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.697029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.697036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.697374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.697381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.697703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.697711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.698019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.698026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.698340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.698348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.698649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.698656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.698964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.698970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 00:38:29.221 [2024-10-01 17:38:27.699250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.221 [2024-10-01 17:38:27.699257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.221 qpair failed and we were unable to recover it. 
00:38:29.221 [2024-10-01 17:38:27.699546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.221 [2024-10-01 17:38:27.699553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.221 qpair failed and we were unable to recover it.
00:38:29.221 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt between 17:38:27.699546 and 17:38:27.762177; only the timestamps differ ...]
00:38:29.500 [2024-10-01 17:38:27.762169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.500 [2024-10-01 17:38:27.762177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.500 qpair failed and we were unable to recover it.
00:38:29.500 [2024-10-01 17:38:27.762471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.500 [2024-10-01 17:38:27.762479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-01 17:38:27.762736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.500 [2024-10-01 17:38:27.762743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-01 17:38:27.762939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.500 [2024-10-01 17:38:27.762946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-01 17:38:27.763324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.500 [2024-10-01 17:38:27.763332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.763491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.763500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.763797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.763804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.764086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.764093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.764388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.764395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.764689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.764696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.765006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.765013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 
00:38:29.501 [2024-10-01 17:38:27.765325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.765332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.765657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.765664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.765938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.765945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.766260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.766268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.766650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.766658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.766946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.766953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.767272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.767279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.767587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.767594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.767886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.767894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.768223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.768232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 
00:38:29.501 [2024-10-01 17:38:27.768425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.768433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.768746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.768754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.769082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.769089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.769383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.769390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.769696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.769703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.770024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.770032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.770244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.770250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.770545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.770552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.770894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.770901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.771153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.771160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 
00:38:29.501 [2024-10-01 17:38:27.771463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.771469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.771762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.771770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.772071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.772078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.772377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.772384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.772721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.772728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.773029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.773037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.773380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.773386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.773667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.773674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.773964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.773972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.774312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.774320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 
00:38:29.501 [2024-10-01 17:38:27.774500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.774507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.774818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.501 [2024-10-01 17:38:27.774825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.501 qpair failed and we were unable to recover it. 00:38:29.501 [2024-10-01 17:38:27.775212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.775220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.775554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.775561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.775869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.775876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.776177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.776183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.776545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.776553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.776808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.776816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.777128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.777135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.777335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.777342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 
00:38:29.502 [2024-10-01 17:38:27.777698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.777705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.778008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.778015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.778312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.778319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.778600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.778607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.778908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.778915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.779073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.779081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.779443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.779450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.779766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.779773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.780068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.780075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.780445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.780452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 
00:38:29.502 [2024-10-01 17:38:27.780754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.780761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.781081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.781088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.781292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.781299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.781514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.781521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.781840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.781847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.782156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.782163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.782356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.782363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.782656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.782663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.783000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.783007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.783362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.783368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 
00:38:29.502 [2024-10-01 17:38:27.783667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.783674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.783962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.783970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.784271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.784278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.502 [2024-10-01 17:38:27.784576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.502 [2024-10-01 17:38:27.784583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.502 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.784856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.784863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.785155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.785162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.785294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.785301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.785571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.785577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.785870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.785879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.786209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.786217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 
00:38:29.503 [2024-10-01 17:38:27.786497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.786504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.786869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.786876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.787168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.787175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.787488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.787495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.787663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.787670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.787774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.787780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.788076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.788091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.788413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.788420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.788732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.788739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.789056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.789063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 
00:38:29.503 [2024-10-01 17:38:27.789456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.789463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.789651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.789658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.789925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.789933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.790262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.790270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.790570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.790576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.790776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.790783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.791062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.791070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.791285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.791291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.791602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.791608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.791947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.791954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 
00:38:29.503 [2024-10-01 17:38:27.792281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.792288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.792472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.792479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.792799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.792806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.792992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.793004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.793303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.793311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.793627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.793634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.793943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.793949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.794272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.794279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.794580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.794587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.794904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.794910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 
00:38:29.503 [2024-10-01 17:38:27.795071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.795079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.795350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.795358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.503 qpair failed and we were unable to recover it. 00:38:29.503 [2024-10-01 17:38:27.795696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.503 [2024-10-01 17:38:27.795703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.795897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.795904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.796178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.796186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.796492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.796500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.796826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.796833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.797139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.797146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.797455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.797463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.797774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.797782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 
00:38:29.504 [2024-10-01 17:38:27.798088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.798095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.798505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.798512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.798815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.798822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.799155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.799162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.799453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.799459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.799749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.799756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.800063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.800070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.800376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.800382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.800667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.800674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 00:38:29.504 [2024-10-01 17:38:27.800966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.800973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it. 
00:38:29.504 [2024-10-01 17:38:27.801353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.504 [2024-10-01 17:38:27.801361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.504 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for this tqpair from 17:38:27.801 through 17:38:27.862 ...]
00:38:29.509 [2024-10-01 17:38:27.862581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.509 [2024-10-01 17:38:27.862589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it.
00:38:29.510 [2024-10-01 17:38:27.862797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.862804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.863125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.863133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.863347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.863354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.863654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.863660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.863950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.863957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.864284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.864291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.864500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.864508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.864832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.864839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.865082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.865089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.865464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.865471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 
00:38:29.510 [2024-10-01 17:38:27.865792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.865800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.866107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.866114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.866468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.866476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.866742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.866749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.867037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.867045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.867377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.867384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.867682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.867688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.867890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.867896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.868189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.868198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.868488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.868495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 
00:38:29.510 [2024-10-01 17:38:27.868806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.868812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.869082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.869089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.869425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.869433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.869769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.869776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.870104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.870111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.870440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.870447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.870627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.870633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.870915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.870921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.871251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.871258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.871589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.871597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 
00:38:29.510 [2024-10-01 17:38:27.871880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.871888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.872204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.872212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.872473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.872481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.872780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.872788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.873086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.873094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.873431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.873439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.873743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.873752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.874067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.510 [2024-10-01 17:38:27.874074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.510 qpair failed and we were unable to recover it. 00:38:29.510 [2024-10-01 17:38:27.874413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.874420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.874746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.874754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 
00:38:29.511 [2024-10-01 17:38:27.875057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.875065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.875381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.875388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.875694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.875702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.875908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.875916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.876207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.876216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.876527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.876535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.876864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.876871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.877157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.877164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.877431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.877440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.877742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.877750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 
00:38:29.511 [2024-10-01 17:38:27.878042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.878050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.878332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.878340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.878676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.878684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.878992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.879010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.879307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.879314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.879646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.879653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.879924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.879931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.880236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.880244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.880586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.880595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.880897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.880904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 
00:38:29.511 [2024-10-01 17:38:27.881206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.881214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.881536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.881543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.881843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.881858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.882170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.882177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.882508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.882517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.882833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.882840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.883156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.883163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.883492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.883499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.883811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.883819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.884117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.884125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 
00:38:29.511 [2024-10-01 17:38:27.884335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.884342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.884646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.511 [2024-10-01 17:38:27.884653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.511 qpair failed and we were unable to recover it. 00:38:29.511 [2024-10-01 17:38:27.885073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.885081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.885395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.885402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.885713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.885720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.885917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.885924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.886178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.886186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.886503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.886511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.886727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.886734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.887026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.887034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 
00:38:29.512 [2024-10-01 17:38:27.887389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.887397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.887583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.887591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.887915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.887921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.888236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.888244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.888431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.888437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.888750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.888757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.889067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.889075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.889422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.889429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.889746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.889753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.890079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.890087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 
00:38:29.512 [2024-10-01 17:38:27.890408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.890415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.890713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.890720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.891041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.891049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.891407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.891415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.891759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.891766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.892050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.892058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.892368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.892376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.892523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.892529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.892794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.892801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.893166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.893173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 
00:38:29.512 [2024-10-01 17:38:27.893542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.893549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.893860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.893867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.894298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.894306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.894592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.894599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.894931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.894939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.895150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.895157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.895496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.895504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.895734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.895740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.896065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.896072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 00:38:29.512 [2024-10-01 17:38:27.896276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.512 [2024-10-01 17:38:27.896283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.512 qpair failed and we were unable to recover it. 
00:38:29.512 [2024-10-01 17:38:27.896632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.896640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.896974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.896980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.897203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.897210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.897517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.897524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.897823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.897830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.898129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.898137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.898442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.898450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.898666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.898673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.898946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.898953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.899280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.899288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 
00:38:29.513 [2024-10-01 17:38:27.899491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.899498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.899777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.899784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.900142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.900151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.900423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.900430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.900765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.900771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.900872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.900881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.901204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.901212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.901567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.901573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.901750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.901757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.902085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.902094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 
00:38:29.513 [2024-10-01 17:38:27.902409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.902416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.902612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.902620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.902918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.902925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.903294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.903301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.903630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.903639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.903966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.903974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.904344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.904351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.904709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.904717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.905024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.905032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.905255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.905262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 
00:38:29.513 [2024-10-01 17:38:27.905645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.905653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.905958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.905964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.906241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.906249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.906531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.906537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.906710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.906717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.907038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.907045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.907376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.907384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.907745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.907752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.513 [2024-10-01 17:38:27.908046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.513 [2024-10-01 17:38:27.908053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.513 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.908377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.908384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 
00:38:29.514 [2024-10-01 17:38:27.908686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.908693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.908912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.908919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.909276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.909283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.909608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.909615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.909927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.909934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.910247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.910255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.910447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.910454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.910740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.910747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.911023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.911030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.911412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.911419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 
00:38:29.514 [2024-10-01 17:38:27.911789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.911797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.912091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.912099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.912309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.912316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.912504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.912511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.912783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.912791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.913031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.913040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.913406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.913414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.913725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.913733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.914031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.914039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.914260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.914267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 
00:38:29.514 [2024-10-01 17:38:27.914578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.914585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.914691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.914697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.914874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.914881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.915094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.915101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.915370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.915378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.915692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.915699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.915907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.915914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.916120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.916128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.916415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.916421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.916646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.916653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 
00:38:29.514 [2024-10-01 17:38:27.916980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.916986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.917219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.917226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.917419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.917427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.917693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.917700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.917925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.917931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.918127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.918134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.918407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.514 [2024-10-01 17:38:27.918414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.514 qpair failed and we were unable to recover it. 00:38:29.514 [2024-10-01 17:38:27.918753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.918760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.919066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.919073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.919328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.919335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 
00:38:29.515 [2024-10-01 17:38:27.919650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.919657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.919868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.919875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.920177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.920185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.920499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.920507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.920819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.920826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.921020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.921027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.921366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.921373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.921608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.921615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.921813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.921820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.922158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.922165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 
00:38:29.515 [2024-10-01 17:38:27.922502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.922509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.922762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.922769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.923074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.923081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.923390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.923397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.923693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.923700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.924012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.924028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.924244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.924251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.924476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.924484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.924653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.924661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.924878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.924885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 
00:38:29.515 [2024-10-01 17:38:27.925136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.925143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.925446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.925453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.925805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.925812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.926117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.926124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.926458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.926465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.926766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.926772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.927068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.927075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.927397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.927404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.927635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.927642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.927955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.927962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 
00:38:29.515 [2024-10-01 17:38:27.928317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.928324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.928622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.928629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.928957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.928963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.929180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.929187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.929375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.929381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.515 [2024-10-01 17:38:27.929690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.515 [2024-10-01 17:38:27.929696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.515 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.929989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.930000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.930327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.930334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.930623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.930630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.930958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.930965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 
00:38:29.516 [2024-10-01 17:38:27.931256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.931264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.931566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.931574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.931925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.931933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.932299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.932307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.932617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.932624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.932836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.932843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.933029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.933037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.933370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.933377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.933466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.933472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.933747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.933754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 
00:38:29.516 [2024-10-01 17:38:27.933965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.933972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.934274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.934281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.934592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.934598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.934819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.934826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.935170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.935177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.935477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.935485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.935651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.935658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.935858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.935865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.936176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.936183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.936533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.936540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 
00:38:29.516 [2024-10-01 17:38:27.936836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.936843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.937166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.937173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.937381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.937388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.937626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.937632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.938022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.938029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.938377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.938385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.938693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.938701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.939006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.939014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.939342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.939348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 00:38:29.516 [2024-10-01 17:38:27.939683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.516 [2024-10-01 17:38:27.939690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.516 qpair failed and we were unable to recover it. 
00:38:29.516 [2024-10-01 17:38:27.939878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.939884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.940122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.940130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.940428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.940435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.940640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.940647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.940990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.940999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.941230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.941237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.941290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.941297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.941580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.941587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.941886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.941892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.942203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.942210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 
00:38:29.517 [2024-10-01 17:38:27.942522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.942529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.942803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.942810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.943136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.943143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.943445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.943452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.943774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.943780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.944068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.944076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.944396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.944404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.944697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.944704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.945000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.945008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.945319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.945326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 
00:38:29.517 [2024-10-01 17:38:27.945637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.945644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.945968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.945974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.946208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.946215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.946541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.946547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.946836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.946843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.947056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.947065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.947346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.947353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.947564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.947571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.947906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.947913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.948202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.948209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 
00:38:29.517 [2024-10-01 17:38:27.948517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.948524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.948840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.948847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.949202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.949209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.949551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.949558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.949899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.949906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.950210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.950217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.950385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.950393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.950761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.950769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.517 [2024-10-01 17:38:27.951094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.517 [2024-10-01 17:38:27.951101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.517 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.951411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.951425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 
00:38:29.518 [2024-10-01 17:38:27.951735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.951742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.952029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.952036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.952448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.952455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.952641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.952648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.953007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.953015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.953317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.953324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.953655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.953662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.953988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.953997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.954305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.954312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.954610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.954617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 
00:38:29.518 [2024-10-01 17:38:27.954988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.954997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.955297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.955305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.955632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.955639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.955975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.955982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.956309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.956316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.956514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.956520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.956922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.956928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.957283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.957290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.957577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.957584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.957897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.957905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 
00:38:29.518 [2024-10-01 17:38:27.958108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.958115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.958385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.958393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.958600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.958607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.958905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.958911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.959204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.959211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.959511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.959520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.959873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.959880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.960179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.960186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.960487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.960494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.960811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.960820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 
00:38:29.518 [2024-10-01 17:38:27.961162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.961169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.961469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.961476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.961646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.961654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.961947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.961953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.962315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.962322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.962631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.962637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.962946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.962953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.518 qpair failed and we were unable to recover it. 00:38:29.518 [2024-10-01 17:38:27.963307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.518 [2024-10-01 17:38:27.963314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.963623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.963630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.963813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.963821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 
00:38:29.519 [2024-10-01 17:38:27.964123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.964130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.964423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.964430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.964636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.964643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.964906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.964913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.965185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.965192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.965508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.965515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.965821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.965828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.966130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.966137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.966441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.966448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.966751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.966758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 
00:38:29.519 [2024-10-01 17:38:27.967049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.967057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.967377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.967384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.967589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.967596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.967784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.967791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.968170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.968177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.968475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.968482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.968792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.968798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.968964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.968971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.969322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.969329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.969629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.969636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 
00:38:29.519 [2024-10-01 17:38:27.969947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.969953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.970128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.970135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.970466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.970473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.970777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.970783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.971066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.971073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.971241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.971251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.971558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.971565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.971858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.971866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.972166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.972173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.972482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.972489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 
00:38:29.519 [2024-10-01 17:38:27.972816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.972823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.973020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.973027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.973368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.973375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.973713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.973721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.974035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.974042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.519 [2024-10-01 17:38:27.974383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.519 [2024-10-01 17:38:27.974389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.519 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.974692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.974699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.975032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.975039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.975428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.975436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.975745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.975752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 
00:38:29.520 [2024-10-01 17:38:27.976036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.976043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.976374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.976380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.976672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.976680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.976985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.976993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.977292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.977299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.977580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.977587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.977913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.977921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.978207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.978216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.978523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.978531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.978859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.978867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 
00:38:29.520 [2024-10-01 17:38:27.979220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.979228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.979554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.979562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.979874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.979881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.980262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.980270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.980556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.980565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.980881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.980889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.981221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.981229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.981541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.981549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.981864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.981872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.982182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.982190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 
00:38:29.520 [2024-10-01 17:38:27.982506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.982514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.982801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.982810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.983107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.983115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.983424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.983431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.983737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.983744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.984068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.984078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.984376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.984383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.984683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.984690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.985002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.985010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 00:38:29.520 [2024-10-01 17:38:27.985227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.985233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.520 qpair failed and we were unable to recover it. 
00:38:29.520 [2024-10-01 17:38:27.985541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.520 [2024-10-01 17:38:27.985548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.985877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.985884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.986170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.986177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.986504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.986512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.986801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.986809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.987111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.987119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.987430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.987437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.987741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.987748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.988035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.988042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.988366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.988373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 
00:38:29.521 [2024-10-01 17:38:27.988581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.988588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.988781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.988789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.989088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.989095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.989408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.989416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.989680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.989686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.989978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.989985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.990241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.990248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.990555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.990562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.990878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.990885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.991160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.991167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 
00:38:29.521 [2024-10-01 17:38:27.991472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.991479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.991787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.991795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.992102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.992109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.992420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.992427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.992754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.992760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.993078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.993085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.993448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.993455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.993745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.993753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.994042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.994050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.994373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.994380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 
00:38:29.521 [2024-10-01 17:38:27.994689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.994696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.994984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.994990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.995307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.995314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.995625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.995632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.995939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.995946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.996244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.996253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.996402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.996409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.996668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.996675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.996977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.521 [2024-10-01 17:38:27.996984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.521 qpair failed and we were unable to recover it. 00:38:29.521 [2024-10-01 17:38:27.997279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.997293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 
00:38:29.522 [2024-10-01 17:38:27.997573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.997580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.997882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.997889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.998218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.998225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.998509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.998516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.998806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.998812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.999122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.999129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.999438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.999446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.999611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.999618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:27.999965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:27.999972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.000257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.000265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 
00:38:29.522 [2024-10-01 17:38:28.000579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.000586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.000876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.000883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.001202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.001209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.001517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.001524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.001832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.001838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.002147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.002155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.002356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.002363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.002688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.002695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.003000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.003008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 00:38:29.522 [2024-10-01 17:38:28.003284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.522 [2024-10-01 17:38:28.003291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.522 qpair failed and we were unable to recover it. 
00:38:29.522 [2024-10-01 17:38:28.003603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.522 [2024-10-01 17:38:28.003610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.522 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each successive connection retry, wall-clock timestamps 17:38:28.003 through 17:38:28.069, console timestamps 00:38:29.522 through 00:38:29.802 ...]
00:38:29.802 [2024-10-01 17:38:28.069493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.802 [2024-10-01 17:38:28.069501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.802 qpair failed and we were unable to recover it.
00:38:29.802 [2024-10-01 17:38:28.069825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.069832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.070131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.070146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.070515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.070521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.070824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.070831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.071042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.071050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.071444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.071451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.071750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.071757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.071970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.071977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.072299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.072306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.072598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.072605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 
00:38:29.802 [2024-10-01 17:38:28.072804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.072812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.072988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.072999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.073303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.073310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.073618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.073626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.074005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.074013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.802 [2024-10-01 17:38:28.074129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.802 [2024-10-01 17:38:28.074136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.802 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.074324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.074331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.074645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.074652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.074950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.074959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.075267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.075274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 
00:38:29.803 [2024-10-01 17:38:28.075562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.075578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.075746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.075752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.075977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.075984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.076371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.076378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.076569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.076576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.076853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.076860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.077099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.077107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.077408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.077415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.077732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.077739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.078049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.078057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 
00:38:29.803 [2024-10-01 17:38:28.078353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.078360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.078651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.078657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.078976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.078984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.079283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.079291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.079497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.079506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.079833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.079840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.080042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.080049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.080287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.080294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.080463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.080470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.080771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.080779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 
00:38:29.803 [2024-10-01 17:38:28.081068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.081075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.081399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.081406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.081730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.081738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.082028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.082036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.082248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.082256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.082560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.082567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.082879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.082886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.083190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.803 [2024-10-01 17:38:28.083197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.803 qpair failed and we were unable to recover it. 00:38:29.803 [2024-10-01 17:38:28.083482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.083489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.083817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.083824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 
00:38:29.804 [2024-10-01 17:38:28.084023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.084031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.084377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.084384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.084691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.084698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.085025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.085032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.085257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.085264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.085568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.085575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.085882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.085890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.086194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.086202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.086484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.086491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.086781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.086788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 
00:38:29.804 [2024-10-01 17:38:28.087069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.087076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.087404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.087412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.087688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.087696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.088007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.088015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.088290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.088297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.088592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.088600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.088777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.088785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.089012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.089019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.089303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.089310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.089645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.089652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 
00:38:29.804 [2024-10-01 17:38:28.089947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.089955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.090282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.090289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.090615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.090625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.090919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.090926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.091236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.091244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.091424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.091432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.091736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.091744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.092064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.092072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.092394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.092401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.092700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.092708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 
00:38:29.804 [2024-10-01 17:38:28.093006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.093014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.093384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.093392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.093674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.093681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.093973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.093979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.094326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.094334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.094649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.094655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.804 qpair failed and we were unable to recover it. 00:38:29.804 [2024-10-01 17:38:28.094971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.804 [2024-10-01 17:38:28.094979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.095292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.095300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.095500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.095507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.095821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.095829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 
00:38:29.805 [2024-10-01 17:38:28.096152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.096160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.096539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.096546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.096851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.096858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.097195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.097202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.097595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.097602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.097882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.097889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.098234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.098242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.098567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.098575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.098882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.098890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.099199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.099207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 
00:38:29.805 [2024-10-01 17:38:28.099503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.099518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.099701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.099709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.099967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.099974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.100279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.100287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.100587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.100594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.100821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.100828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.101139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.101146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.101475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.101483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.101674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.101681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.101998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.102006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 
00:38:29.805 [2024-10-01 17:38:28.102352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.102359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.102655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.102663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.102867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.102876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.103187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.103195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.103490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.103497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.103784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.103792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.104104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.104111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.104359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.104366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.104669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.104676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.105004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.105012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 
00:38:29.805 [2024-10-01 17:38:28.105381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.105388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.105677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.105684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.105966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.105973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.106386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.106394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.805 [2024-10-01 17:38:28.106694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.805 [2024-10-01 17:38:28.106701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.805 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.107016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.107023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.107255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.107262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.107600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.107607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.107901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.107908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.108060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.108068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 
00:38:29.806 [2024-10-01 17:38:28.108291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.108299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.108603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.108610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.108914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.108921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.109229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.109236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.109555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.109562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.109772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.109779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.110058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.110065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.110288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.110294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.110608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.110615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.110905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.110912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 
00:38:29.806 [2024-10-01 17:38:28.111252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.111259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.111539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.111546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.111909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.111917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.112219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.112227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.112552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.112560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.112753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.112761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.112953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.112961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.113290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.113297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.113494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.113501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.113604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.113611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 
00:38:29.806 [2024-10-01 17:38:28.113891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.113899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.114217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.114226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.114531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.114540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.114633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.114640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.114922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.114930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.115204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.115211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.115398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.115406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.115630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.115638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.115960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.115967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.116262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.116270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 
00:38:29.806 [2024-10-01 17:38:28.116591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.116599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.116844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.116852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.806 [2024-10-01 17:38:28.117050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.806 [2024-10-01 17:38:28.117057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.806 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.117344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.117351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.117706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.117714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.118051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.118060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.118370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.118377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.118453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.118459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.118782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.118789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.118961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.118967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 
00:38:29.807 [2024-10-01 17:38:28.119136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.119144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.119493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.119500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.119809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.119816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.120117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.120124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.120446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.120453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.120651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.120657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.120990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.121005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.121313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.121320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.121612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.121619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.121920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.121926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 
00:38:29.807 [2024-10-01 17:38:28.122173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.122180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.122492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.122499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.122813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.122820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.123115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.123123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.123371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.123378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.123727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.123733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.124043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.124050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.124253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.124260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.124599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.124607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.124918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.124925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 
00:38:29.807 [2024-10-01 17:38:28.125231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.125239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.125532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.125539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.125660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.125668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.125836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.125843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.126161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.126168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.126486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.126502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.126687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.126694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.126881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.126888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.127104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.127111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.127411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.127418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 
00:38:29.807 [2024-10-01 17:38:28.127734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.127741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.807 qpair failed and we were unable to recover it. 00:38:29.807 [2024-10-01 17:38:28.127926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.807 [2024-10-01 17:38:28.127933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.128015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.128022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.128357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.128364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.128706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.128712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.129006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.129013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.129358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.129365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.129663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.129677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.129979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.129987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.130199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.130207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 
00:38:29.808 [2024-10-01 17:38:28.130536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.130543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.130831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.130838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.131155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.131162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.131458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.131466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.131775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.131782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.131976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.131983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.132204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.132211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.132540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.132547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.132861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.132868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.133183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.133191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 
00:38:29.808 [2024-10-01 17:38:28.133489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.133497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.133789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.133796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.134097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.134105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.134345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.134352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.134562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.134570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.134761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.134768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.135086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.135093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.135303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.135310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.135539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.135546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.135863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.135870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 
00:38:29.808 [2024-10-01 17:38:28.136160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.136167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.136346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.136354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.136621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.136629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.136806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.136813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.137108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.808 [2024-10-01 17:38:28.137115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.808 qpair failed and we were unable to recover it. 00:38:29.808 [2024-10-01 17:38:28.137429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.137436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.137730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.137736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.138036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.138043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.138376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.138384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.138784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.138791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 
00:38:29.809 [2024-10-01 17:38:28.139159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.139166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.139385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.139391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.139712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.139719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.140073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.140080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.140400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.140407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.140698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.140704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.141021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.141029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.141341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.141348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.141679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.141685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.141986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.141996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 
00:38:29.809 [2024-10-01 17:38:28.142309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.142315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.142624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.142631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.142940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.142947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.143302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.143310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.143630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.143637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.143946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.143954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.144325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.144332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.144583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.144589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.144924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.144932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.145264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.145272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 
00:38:29.809 [2024-10-01 17:38:28.145573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.145581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.145879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.145886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.146266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.146274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.146475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.146482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.146775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.146783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.146988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.146998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.147306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.147313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.147626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.147633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.147910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.147916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.148214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.148221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 
00:38:29.809 [2024-10-01 17:38:28.148451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.148457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.148783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.148790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.149079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.149087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.809 qpair failed and we were unable to recover it. 00:38:29.809 [2024-10-01 17:38:28.149384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.809 [2024-10-01 17:38:28.149392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.149692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.149699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.150003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.150011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.150221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.150228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.150504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.150510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.150818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.150825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.151027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.151034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 
00:38:29.810 [2024-10-01 17:38:28.151342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.151350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.151537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.151544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.151845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.151852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.152176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.152183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.152513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.152520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.152724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.152731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.153042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.153049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.153361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.153368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.153446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.153453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.153741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.153748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 
00:38:29.810 [2024-10-01 17:38:28.153944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.153952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.154280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.154287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.154594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.154601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.154882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.154889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.155198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.155205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.155528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.155535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.155826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.155841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.156204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.156211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.156528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.156534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.156851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.156858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 
00:38:29.810 [2024-10-01 17:38:28.157248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.157256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.157544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.157551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.157881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.157887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.158297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.158305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.158632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.158639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.158853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.158860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.159072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.159079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.159445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.159452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.159759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.159766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.160155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.160163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 
00:38:29.810 [2024-10-01 17:38:28.160481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.160488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.160779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.810 [2024-10-01 17:38:28.160793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.810 qpair failed and we were unable to recover it. 00:38:29.810 [2024-10-01 17:38:28.161024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.161032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.161305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.161312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.161623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.161630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.161942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.161950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.162341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.162348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.162543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.162549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.162859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.162865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.163313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.163320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 
00:38:29.811 [2024-10-01 17:38:28.163604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.163611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.163923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.163929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.164227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.164235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.164549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.164555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.164935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.164942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.165266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.165273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.165615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.165623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.165921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.165927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.166185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.166192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.166522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.166528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 
00:38:29.811 [2024-10-01 17:38:28.166814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.166827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.166985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.166996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.167316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.167323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.167607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.167614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.167910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.167917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.168208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.168215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.168510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.168517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.168839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.168846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.169061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.169068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.169377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.169384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 
00:38:29.811 [2024-10-01 17:38:28.169582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.169590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.169897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.169903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.170265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.170273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.170626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.170633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.170921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.170927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.171355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.171362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.171653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.171660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.171976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.171983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.172346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.172353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.811 [2024-10-01 17:38:28.172662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.172669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 
00:38:29.811 [2024-10-01 17:38:28.172971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.811 [2024-10-01 17:38:28.172978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.811 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.173299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.173307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.173632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.173640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.173941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.173948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.174280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.174288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.174598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.174605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.174919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.174927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.175249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.175257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.175564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.175571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.175863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.175869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 
00:38:29.812 [2024-10-01 17:38:28.176107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.176114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.176426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.176433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.176757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.176764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.177021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.177029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.177351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.177358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.177743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.177749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.178041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.178048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.178451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.178458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.178672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.178679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.178981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.178988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 
00:38:29.812 [2024-10-01 17:38:28.179108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.179115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.179371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.179378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.179585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.179591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.179752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.179759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.180102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.180110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.180422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.180428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.180797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.180805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.181010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.181017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.181309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.181316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.181631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.181639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 
00:38:29.812 [2024-10-01 17:38:28.181955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.181962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.182174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.182181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.182518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.182525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.182831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.182838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.183228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.812 [2024-10-01 17:38:28.183235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.812 qpair failed and we were unable to recover it. 00:38:29.812 [2024-10-01 17:38:28.183540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.183546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.183755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.183762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.184138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.184145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.184484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.184492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.184801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.184808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 
00:38:29.813 [2024-10-01 17:38:28.185003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.185011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.185240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.185247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.185563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.185571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.185904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.185910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.186262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.186270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.186548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.186555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.186871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.186878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.187090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.187097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.187362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.187369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.187697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.187704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 
00:38:29.813 [2024-10-01 17:38:28.188032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.188040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.188355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.188362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.188700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.188707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.188912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.188919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.189228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.189235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.189546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.189553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.189863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.189870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.190074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.190082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.190392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.190399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.190699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.190706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 
00:38:29.813 [2024-10-01 17:38:28.190908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.190915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.191246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.191253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.191565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.191573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.191932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.191939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.192279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.192286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.192614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.192621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.192925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.192933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.193248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.193255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.193558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.193565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.193756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.193763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 
00:38:29.813 [2024-10-01 17:38:28.194075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.194082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.194269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.194276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.194648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.194654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.813 [2024-10-01 17:38:28.194970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.813 [2024-10-01 17:38:28.194976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.813 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.195298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.195305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.195504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.195511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.195811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.195817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.196133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.196140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.196326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.196341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.196533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.196540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 
00:38:29.814 [2024-10-01 17:38:28.196830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.196836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.197101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.197108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.197415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.197422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.197541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.197548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.197769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.197776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.198058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.198065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.198403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.198410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.198607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.198614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.198940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.198948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.199223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.199230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 
00:38:29.814 [2024-10-01 17:38:28.199547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.199554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.199834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.199841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.200153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.200160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.200372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.200379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.200674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.200680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.201005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.201013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.201337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.201343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.201635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.201643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.201877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.201884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.202261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.202267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 
00:38:29.814 [2024-10-01 17:38:28.202465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.202471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.202741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.202748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.203118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.203125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.203426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.203440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.203632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.203639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.203941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.203948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.204144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.204152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.204521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.204527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.204842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.204849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.205140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.205149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 
00:38:29.814 [2024-10-01 17:38:28.205454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.205461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.205786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.814 [2024-10-01 17:38:28.205792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.814 qpair failed and we were unable to recover it. 00:38:29.814 [2024-10-01 17:38:28.205968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.205975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.206337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.206344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.206662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.206669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.206971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.206978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.207194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.207201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.207377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.207384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.207618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.207624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 00:38:29.815 [2024-10-01 17:38:28.207991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.815 [2024-10-01 17:38:28.208001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.815 qpair failed and we were unable to recover it. 
00:38:29.815 [2024-10-01 17:38:28.208269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.815 [2024-10-01 17:38:28.208276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.815 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 17:38:28.208 through 17:38:28.270 ...]
00:38:29.820 [2024-10-01 17:38:28.270093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:29.820 [2024-10-01 17:38:28.270100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420
00:38:29.820 qpair failed and we were unable to recover it.
00:38:29.820 [2024-10-01 17:38:28.270420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.270427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.270621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.270628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.270804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.270811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.271124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.271131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.271324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.271331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.271664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.271670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.272007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.272015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.272319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.272326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.820 [2024-10-01 17:38:28.272536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.820 [2024-10-01 17:38:28.272543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.820 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.272718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.272724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 
00:38:29.821 [2024-10-01 17:38:28.272977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.272984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.273296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.273303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.273624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.273631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.273913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.273920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.274229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.274236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.274553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.274560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.274867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.274873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.275160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.275168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.275366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.275373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.275681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.275689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 
00:38:29.821 [2024-10-01 17:38:28.276004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.276013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.276289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.276296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.276657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.276664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.276982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.276989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.277294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.277301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.277597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.277604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.277938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.277945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.278230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.278238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.278562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.278569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.278858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.278865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 
00:38:29.821 [2024-10-01 17:38:28.279201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.279208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.279516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.279523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.279830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.279837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.280139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.280148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.280464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.280471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.280779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.280786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.281097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.281104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.281290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.281298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.281622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.281629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.281942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.281949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 
00:38:29.821 [2024-10-01 17:38:28.282254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.282261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.282570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.282577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.282882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.821 [2024-10-01 17:38:28.282889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.821 qpair failed and we were unable to recover it. 00:38:29.821 [2024-10-01 17:38:28.283187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.283195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.283500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.283507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.283804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.283811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.284112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.284119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.284417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.284424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.284734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.284741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.285052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.285060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 
00:38:29.822 [2024-10-01 17:38:28.285311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.285318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.285595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.285602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.285910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.285916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.286212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.286219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.286371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.286378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.286688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.286695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.287012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.287021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.287324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.287331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.287623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.287630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.287798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.287808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 
00:38:29.822 [2024-10-01 17:38:28.288134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.288141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.288446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.288453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.288748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.288754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.288918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.288926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.289281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.289288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.289617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.289624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.289903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.289909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.290203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.290210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.290526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.290533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.290697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.290705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 
00:38:29.822 [2024-10-01 17:38:28.291073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.291080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.291398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.291405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.291716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.291723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.292016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.292024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.292231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.292238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.292546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.292553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.292859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.292866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.293115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.293122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.293416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.293422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.293628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.293636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 
00:38:29.822 [2024-10-01 17:38:28.293951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.293958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.822 [2024-10-01 17:38:28.294274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.822 [2024-10-01 17:38:28.294281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.822 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.294569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.294576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.294860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.294867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.295159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.295166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.295462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.295469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.295769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.295777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.296117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.296125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.296457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.296464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.296786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.296792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 
00:38:29.823 [2024-10-01 17:38:28.297084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.297091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.297454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.297461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.297738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.297745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.297919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.297926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.298249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.298256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.298591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.298597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.298903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.298910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.299299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.299306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.299612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.299618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.299936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.299945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 
00:38:29.823 [2024-10-01 17:38:28.300310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.300317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.300623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.300629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.300919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.300926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.301264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.301271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.301577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.301584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.301896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.301903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.302221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.302228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.302549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.302556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.302870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.302877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.303163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.303171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 
00:38:29.823 [2024-10-01 17:38:28.303477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.303484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.303758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.303766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.304053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.304060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.304383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.304390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.304681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.304688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.304999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.305006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.305336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.305344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.305635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.305641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.305924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.305931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 00:38:29.823 [2024-10-01 17:38:28.306220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.823 [2024-10-01 17:38:28.306227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.823 qpair failed and we were unable to recover it. 
00:38:29.823 [2024-10-01 17:38:28.306377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.306385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.306627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.306634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.306955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.306962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.307159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.307166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.307458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.307464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.307654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.307661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.308008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.308016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.308323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.308330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.308640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.308647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 00:38:29.824 [2024-10-01 17:38:28.308931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.308938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it. 
00:38:29.824 [2024-10-01 17:38:28.309224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.824 [2024-10-01 17:38:28.309238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:29.824 qpair failed and we were unable to recover it.
00:38:30.110 [...] the same two-entry error pair repeats without interruption from 17:38:28.309 through 17:38:28.371: every reconnect attempt against tqpair=0x7f9564000b90 (addr=10.0.0.2, port=4420) fails in posix_sock_create with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and each attempt ends with "qpair failed and we were unable to recover it." [...]
00:38:30.110 [2024-10-01 17:38:28.371986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.371997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it.
00:38:30.110 [2024-10-01 17:38:28.372353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.372361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.372624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.372632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.372958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.372965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.373272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.373280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.373479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.373486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.373805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.373812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.374075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.374083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.374375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.374381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.374710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.374718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.374914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.374922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 
00:38:30.110 [2024-10-01 17:38:28.375207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.375215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.375507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.375515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.375818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.375826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.376048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.376055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.376340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.376347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.376712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.376719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.377019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.377026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.377330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.377337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.110 qpair failed and we were unable to recover it. 00:38:30.110 [2024-10-01 17:38:28.377659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.110 [2024-10-01 17:38:28.377666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.377865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.377871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 
00:38:30.111 [2024-10-01 17:38:28.378077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.378085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.378291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.378299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.378563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.378571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.378887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.378894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.379271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.379279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.379479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.379486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.379831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.379842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.380060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.380067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.380374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.380382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.380662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.380669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 
00:38:30.111 [2024-10-01 17:38:28.380951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.380958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.381307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.381315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.381501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.381508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.381880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.381887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.382198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.382206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.382506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.382512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.382801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.382808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.383111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.383118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.383298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.383305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.383647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.383654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 
00:38:30.111 [2024-10-01 17:38:28.383863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.383870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.384151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.384158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.384476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.384483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.384785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.384793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.385122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.385129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.385444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.385451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.385738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.385744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.385932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.385939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.386305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.386313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.111 [2024-10-01 17:38:28.386624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.386630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 
00:38:30.111 [2024-10-01 17:38:28.386923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.111 [2024-10-01 17:38:28.386930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.111 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.387237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.387245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.387534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.387541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.387871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.387878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.388206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.388213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.388537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.388543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.388860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.388867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.389178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.389186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.389570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.389577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.389885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.389892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 
00:38:30.112 [2024-10-01 17:38:28.390198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.390205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.390508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.390515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.390723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.390731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.391070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.391077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.391443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.391451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.391497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.391504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.391805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.391814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.392118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.392126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.392460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.392468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.392663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.392671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 
00:38:30.112 [2024-10-01 17:38:28.393002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.393009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.393293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.393300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.393472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.393480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.393790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.393797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.394129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.394137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.394473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.394481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.394770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.394777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.395098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.395105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.395365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.395372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.395671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.395678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 
00:38:30.112 [2024-10-01 17:38:28.395968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.395975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.396302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.396310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.396397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.396403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.396724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.396731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.397030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.397038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.397343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.397349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.397647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.397653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.397961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.397968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.398079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.398086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.112 qpair failed and we were unable to recover it. 00:38:30.112 [2024-10-01 17:38:28.398357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.112 [2024-10-01 17:38:28.398364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 
00:38:30.113 [2024-10-01 17:38:28.398668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.398675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.398877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.398884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.399169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.399177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.399398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.399405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.399700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.399706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.400023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.400030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.400131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.400137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.400459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.400466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.400763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.400770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.401029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.401037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 
00:38:30.113 [2024-10-01 17:38:28.401418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.401426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.401624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.401631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.401940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.401947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.402251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.402258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.402570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.402577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.402896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.402903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.403210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.403221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.403502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.403510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.403731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.403738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.404050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.404057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 
00:38:30.113 [2024-10-01 17:38:28.404360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.404367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.404734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.404741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.405014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.405022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.405245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.405252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.405435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.405442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.405748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.405756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.406063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.406070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.406325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.406332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.406625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.406632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.406861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.406869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 
00:38:30.113 [2024-10-01 17:38:28.407181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.407188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.407470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.407477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.407765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.407772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.408078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.408087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.408386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.408394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.408610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.408617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.408999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.409007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.409318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.113 [2024-10-01 17:38:28.409325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.113 qpair failed and we were unable to recover it. 00:38:30.113 [2024-10-01 17:38:28.409527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.409534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.409844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.409852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 
00:38:30.114 [2024-10-01 17:38:28.410150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.410158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.410447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.410455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.410751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.410757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.411065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.411073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.411290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.411297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.411599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.411607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.411923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.411930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.412247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.412259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.412567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.412574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.412867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.412874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 
00:38:30.114 [2024-10-01 17:38:28.413096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.413103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.413495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.413503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.413802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.413809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.414133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.414141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.414355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.414362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.414577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.414584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.414881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.414890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.415219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.415227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.415531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.415538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.415841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.415849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 
00:38:30.114 [2024-10-01 17:38:28.416205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.416212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.416520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.416528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.416847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.416854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.417033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.417040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.417329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.417336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.417647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.417654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.417941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.417948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.418265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.418272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.418610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.418618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.418946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.418953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 
00:38:30.114 [2024-10-01 17:38:28.419310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.419317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.419479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.419486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.419764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.419771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.420103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.420110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.420417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.420424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.420762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.420769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.114 qpair failed and we were unable to recover it. 00:38:30.114 [2024-10-01 17:38:28.421055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.114 [2024-10-01 17:38:28.421062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.421373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.421380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.421663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.421669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.421981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.421988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 
00:38:30.115 [2024-10-01 17:38:28.422204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.422211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.422518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.422525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.422710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.422716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.422990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.423001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.423330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.423338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.423490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.423498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.423717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.423724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.424063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.424071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.424385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.424392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.424727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.424735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 
00:38:30.115 [2024-10-01 17:38:28.424899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.424907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.425199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.425206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.425506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.425513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.425793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.425800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.426114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.426121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.426342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.426350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.426668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.426677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.426976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.426983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.427271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.427278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.427595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.427602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 
00:38:30.115 [2024-10-01 17:38:28.427935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.427941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.428241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.428248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.428528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.428534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.428795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.428802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.428962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.428970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.429074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.429081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.429355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.429362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.429659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.429666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.429998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.430006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.430315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.430322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 
00:38:30.115 [2024-10-01 17:38:28.430611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.430618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.430930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.430936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.115 qpair failed and we were unable to recover it. 00:38:30.115 [2024-10-01 17:38:28.431177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.115 [2024-10-01 17:38:28.431185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.431479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.431487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.431667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.431674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.432075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.432082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.432399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.432406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.432613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.432620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.432810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.432817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.433160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.433167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 
00:38:30.116 [2024-10-01 17:38:28.433427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.433434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.433757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.433764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.434050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.434058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.434374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.434381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.434662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.434669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.434966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.434973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.435271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.435279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.435600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.435607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.435902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.435910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.436246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.436254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 
00:38:30.116 [2024-10-01 17:38:28.436576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.436584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.436880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.436887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.437210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.437217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.437498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.437504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.437835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.437842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.438141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.438149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.438488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.438500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.438822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.438829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.439121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.439128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.439431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.439437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 
00:38:30.116 [2024-10-01 17:38:28.439758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.439766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.440047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.440054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.440324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.440331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.440661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.440668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.441024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.441032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.441312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.441319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.441626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.441633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.441956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.441963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.116 qpair failed and we were unable to recover it. 00:38:30.116 [2024-10-01 17:38:28.442318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.116 [2024-10-01 17:38:28.442325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.442635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.442642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 
00:38:30.117 [2024-10-01 17:38:28.442814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.442821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.443141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.443148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.443310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.443317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.443605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.443612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.443924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.443931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.444304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.444311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.444570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.444577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.444867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.444874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.445064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.445070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.445213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.445219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 
00:38:30.117 [2024-10-01 17:38:28.445531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.445539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.445718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.445726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.445980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.445987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.446163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.446170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.446491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.446498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.446784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.446791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.447109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.447116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.447418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.447426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.447709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.447716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.448026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.448033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 
00:38:30.117 [2024-10-01 17:38:28.448233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.448240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.448571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.448578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.448866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.448873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.449181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.449188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.449472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.449479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.449744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.449752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 00:38:30.117 [2024-10-01 17:38:28.449861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.117 [2024-10-01 17:38:28.449870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9564000b90 with addr=10.0.0.2, port=4420 00:38:30.117 qpair failed and we were unable to recover it. 
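The long run of records above shows the NVMe/TCP initiator retrying the same TCP connection over and over: every connect() to 10.0.0.2 port 4420 fails with errno 111, which on Linux is ECONNREFUSED (nothing is accepting connections on that port, or the listener has gone away), so each queue pair attempt is torn down with "qpair failed and we were unable to recover it." As a rough illustration only, not part of the test output and not SPDK's own socket path (that is posix_sock_create() in posix.c, as the log says), a plain POSIX C sketch of the same kind of connect attempt looks like this; the address and port are taken from the log:

/* Illustration only: attempt a TCP connect to the target address shown in
 * the log and report errno.  Errno 111 on Linux is ECONNREFUSED, meaning
 * the SYN was rejected because nothing is listening on 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {0};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	addr.sin_family = AF_INET;
	addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
	} else {
		printf("connected\n");
	}

	close(fd);
	return 0;
}

Run against a host with no listener on TCP port 4420, this prints "connect failed: errno=111 (Connection refused)", which is exactly the condition the driver keeps hitting above.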
00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Read completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.117 Write completed with error (sct=0, sc=8) 00:38:30.117 starting I/O failed 00:38:30.118 [2024-10-01 17:38:28.450144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.118 [2024-10-01 17:38:28.450469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.450485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 
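The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" records, followed by the CQ transport error -6 from spdk_nvme_qpair_process_completions(), is the driver failing whatever I/O was still outstanding on the queue pair once its TCP transport dropped: -6 is -ENXIO ("No such device or address", as printed), and sct=0/sc=8 is the generic aborted-command status used for commands whose submission queue has gone away. After that a new queue pair is attempted (the tqpair pointer changes to 0x6ff1f0) and the same connect() failures continue below. As a hedged sketch of how an application-level polling loop observes this condition through the public SPDK API, assuming a connected I/O qpair was allocated elsewhere (for example with spdk_nvme_ctrlr_alloc_io_qpair()) and SPDK headers are available, and not the test harness's actual code:

/* Sketch only: poll one qpair and detect transport failure.
 * spdk_nvme_qpair_process_completions() returns the number of completions
 * it reaped, or a negative errno (such as -ENXIO, the -6 in the log above)
 * once the qpair can no longer be used; by then the outstanding commands
 * have been completed with an aborted status like the entries above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include "spdk/nvme.h"

static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 asks the driver to process everything ready. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* Transport-level failure, e.g. the dropped TCP connection above. */
		fprintf(stderr, "qpair poll failed: %d\n", rc);
		return false;
	}

	return true; /* rc completions (possibly zero) were processed. */
}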
00:38:30.118 [2024-10-01 17:38:28.450697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.450708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.450976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.450986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.451319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.451329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.451642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.451652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.451987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.452001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.452331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.452342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.452660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.452670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.452750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.452761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.452977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.452987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.453283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.453293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 
00:38:30.118 [2024-10-01 17:38:28.453609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.453619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.453922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.453932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.454218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.454228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.454534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.454543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.454867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.454876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.455072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.455082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.455397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.455406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.455731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.455740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.456004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.456014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 00:38:30.118 [2024-10-01 17:38:28.456304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.118 [2024-10-01 17:38:28.456314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.118 qpair failed and we were unable to recover it. 
00:38:30.118 [2024-10-01 17:38:28.456633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.118 [2024-10-01 17:38:28.456644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.118 qpair failed and we were unable to recover it.
00:38:30.118 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 17:38:28.456633 through 17:38:28.519693 ...]
00:38:30.124 [2024-10-01 17:38:28.520004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.520013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.520321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.520330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.520639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.520648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.520952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.520961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.521269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.521279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.521564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.521574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.521878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.521890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.522172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.522182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.522504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.522513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.522806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.522817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 
00:38:30.124 [2024-10-01 17:38:28.523115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.523125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.523429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.523439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.523757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.523768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.524043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.524053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.524345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.524363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.524671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.524681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.524939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.524949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.525276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.525285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.525591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.525601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.525909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.525919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 
00:38:30.124 [2024-10-01 17:38:28.526196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.526206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.526559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.526568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.526880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.526890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.527220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.527231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.527456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.527466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.527794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.527804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.527989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.528002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.528226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.528236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.528596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.528605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.528902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.528912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 
00:38:30.124 [2024-10-01 17:38:28.529213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.529223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.529501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.529511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.529816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.529826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.529988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.530003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.530326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.530336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.530615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.530632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.124 qpair failed and we were unable to recover it. 00:38:30.124 [2024-10-01 17:38:28.530935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.124 [2024-10-01 17:38:28.530945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.531246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.531257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.531566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.531575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.531888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.531897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 
00:38:30.125 [2024-10-01 17:38:28.532161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.532171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.532465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.532483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.532806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.532816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.533092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.533101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.533410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.533420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.533746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.533755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.533960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.533969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.534270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.534282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.534594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.534604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.534795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.534805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 
00:38:30.125 [2024-10-01 17:38:28.535136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.535146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.535471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.535481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.535792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.535802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.536101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.536111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.536395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.536404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.536713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.536722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.537047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.537057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.537369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.537378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.537579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.537588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.537919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.537928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 
00:38:30.125 [2024-10-01 17:38:28.538231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.538241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.538564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.538574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.538877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.538886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.539187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.539198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.539470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.539480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.539760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.539769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.540140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.540150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.540491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.540500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.540779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.540788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.541066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.541077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 
00:38:30.125 [2024-10-01 17:38:28.541343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.125 [2024-10-01 17:38:28.541354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.125 qpair failed and we were unable to recover it. 00:38:30.125 [2024-10-01 17:38:28.541660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.541669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.541925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.541934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.542243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.542253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.542573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.542585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.542887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.542897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.543195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.543205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.543520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.543530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.543822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.543839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.544146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.544156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 
00:38:30.126 [2024-10-01 17:38:28.544433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.544443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.544774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.544783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.545095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.545105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.545430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.545440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.545773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.545783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.546124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.546134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.546336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.546346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.546664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.546673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.547000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.547011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.547293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.547302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 
00:38:30.126 [2024-10-01 17:38:28.547582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.547591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.547894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.547903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.548227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.548237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.548563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.548573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.548845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.548855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.549168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.549177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.549491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.549508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.549725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.549735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.550015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.550025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.550338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.550347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 
00:38:30.126 [2024-10-01 17:38:28.550656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.550666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.550895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.550904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.551224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.551234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.551433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.551443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.551755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.551764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.552056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.552066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.552377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.552387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.552675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.552684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.553008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.553019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 00:38:30.126 [2024-10-01 17:38:28.553381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.126 [2024-10-01 17:38:28.553391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.126 qpair failed and we were unable to recover it. 
00:38:30.126 [2024-10-01 17:38:28.553707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.553717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.553992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.554005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.554324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.554333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.554610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.554620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.554880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.554889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.555206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.555220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.555529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.555547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.555877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.555887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.556251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.556262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.556559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.556570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 
00:38:30.127 [2024-10-01 17:38:28.556834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.556844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.557137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.557147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.557449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.557458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.557756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.557765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.557924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.557935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.558309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.558319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.558717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.558728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.559030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.559039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.559317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.559326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 00:38:30.127 [2024-10-01 17:38:28.559659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.127 [2024-10-01 17:38:28.559669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.127 qpair failed and we were unable to recover it. 
00:38:30.127 [2024-10-01 17:38:28.559961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.127 [2024-10-01 17:38:28.559970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.127 qpair failed and we were unable to recover it.
00:38:30.127 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously for tqpair=0x6ff1f0 (addr=10.0.0.2, port=4420, errno = 111), each attempt ending with "qpair failed and we were unable to recover it.", from 2024-10-01 17:38:28.559961 through 17:38:28.623958 (console timestamps 00:38:30.127-00:38:30.132); only the timestamps differ between entries ...]
00:38:30.133 [2024-10-01 17:38:28.624299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.624310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.624640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.624651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.624962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.624971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.625368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.625378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.625681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.625690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.626086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.626095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.626418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.626428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.626720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.626730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.626927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.626936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.627228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.627237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 
00:38:30.133 [2024-10-01 17:38:28.627527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.627537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.627843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.627853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.627953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.627964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.628231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.628241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.628576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.628586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.628891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.628901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.629189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.629199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.629532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.629543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.629822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.629833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.630137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.630146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 
00:38:30.133 [2024-10-01 17:38:28.630437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.630449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.630759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.630769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.631068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.631078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.631386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.631395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.631678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.631687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.632001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.632010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.632365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.632375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.632676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.632686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.133 [2024-10-01 17:38:28.633003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.133 [2024-10-01 17:38:28.633013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.133 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.633354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.633364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 
00:38:30.468 [2024-10-01 17:38:28.633649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.633658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.633960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.633969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.634297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.634307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.634583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.634592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.634879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.634888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.635161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.635171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.635482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.635491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.635810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.635820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.636048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.636058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 00:38:30.468 [2024-10-01 17:38:28.636376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.468 [2024-10-01 17:38:28.636386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.468 qpair failed and we were unable to recover it. 
00:38:30.468 [2024-10-01 17:38:28.636665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.636675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.636979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.636989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.637291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.637300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.637679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.637688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.638014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.638024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.638278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.638287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.638594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.638605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.638871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.638885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.639175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.639185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.639467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.639476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 
00:38:30.469 [2024-10-01 17:38:28.639746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.639755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.640070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.640080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.640355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.640365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.640678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.640687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.640989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.641003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.641302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.641312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.641615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.641624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.641949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.641959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.642282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.642292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.642593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.642602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 
00:38:30.469 [2024-10-01 17:38:28.642804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.642814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.643145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.643155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.643483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.643493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.643692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.643702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.644044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.644054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.644358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.644367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.644646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.644656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.644960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.644969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.645291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.645301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.645576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.645587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 
00:38:30.469 [2024-10-01 17:38:28.645860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.645871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.646157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.646167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.646456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.646466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.646651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.646661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.646958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.646968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.647165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.647175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.647502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.647511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.647775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.647785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.648073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.469 [2024-10-01 17:38:28.648083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.469 qpair failed and we were unable to recover it. 00:38:30.469 [2024-10-01 17:38:28.648371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.648381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 
00:38:30.470 [2024-10-01 17:38:28.648689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.648699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.649006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.649016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.649306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.649316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.649621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.649631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.649936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.649946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.650213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.650224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.650541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.650551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.650818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.650828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.651150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.651163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.651478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.651487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 
00:38:30.470 [2024-10-01 17:38:28.651771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.651781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.652057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.652067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.652377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.652387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.652578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.652587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.652875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.652885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.653169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.653179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.653481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.653491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.653765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.653774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.654142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.654154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.654459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.654469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 
00:38:30.470 [2024-10-01 17:38:28.654789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.654799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.655113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.655123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.655411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.655421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.655725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.655735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.656046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.656057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.656357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.656366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.656646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.656656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.656955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.656965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.657185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.657196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.657495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.657504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 
00:38:30.470 [2024-10-01 17:38:28.657898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.657908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.658183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.658193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.658394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.658404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.658671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.658681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.658956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.658965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.659268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.470 [2024-10-01 17:38:28.659280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.470 qpair failed and we were unable to recover it. 00:38:30.470 [2024-10-01 17:38:28.659588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.659597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.659864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.659874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.660054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.660065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.660328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.660338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 
00:38:30.471 [2024-10-01 17:38:28.660652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.660662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.660943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.660954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.661185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.661195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.661492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.661501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.661804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.661813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.662129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.662139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.662464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.662473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.662777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.662787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.663068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.663078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 00:38:30.471 [2024-10-01 17:38:28.663423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.471 [2024-10-01 17:38:28.663433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.471 qpair failed and we were unable to recover it. 
00:38:30.471 [2024-10-01 17:38:28.663717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.471 [2024-10-01 17:38:28.663727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.471 qpair failed and we were unable to recover it.
[... the same pair of errors (connect() failed, errno = 111 / sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats for every reconnect attempt from 17:38:28.663 through 17:38:28.725 ...]
00:38:30.476 [2024-10-01 17:38:28.725708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.476 [2024-10-01 17:38:28.725718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.476 qpair failed and we were unable to recover it.
00:38:30.476 [2024-10-01 17:38:28.725912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.476 [2024-10-01 17:38:28.725922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.476 qpair failed and we were unable to recover it. 00:38:30.476 [2024-10-01 17:38:28.726238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.476 [2024-10-01 17:38:28.726248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.476 qpair failed and we were unable to recover it. 00:38:30.476 [2024-10-01 17:38:28.726523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.726532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.726861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.726872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.727061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.727071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.727374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.727383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.727643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.727652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.727958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.727968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.728278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.728288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.728589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.728599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 
00:38:30.477 [2024-10-01 17:38:28.728925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.728935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.729221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.729231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.729545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.729555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.729880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.729890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.730193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.730202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.730494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.730503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.730696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.730707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.731041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.731051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.731363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.731373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.731660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.731670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 
00:38:30.477 [2024-10-01 17:38:28.731973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.731985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.732289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.732299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.732600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.732610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.732921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.732931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.733270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.733280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.733578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.733587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.733859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.733869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.734157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.734166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.734485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.734495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.734812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.734821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 
00:38:30.477 [2024-10-01 17:38:28.735139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.735149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.735438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.735448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.735643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.735654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.735923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.735933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.736206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.736216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.736376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.736388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.477 [2024-10-01 17:38:28.736698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.477 [2024-10-01 17:38:28.736707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.477 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.736991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.737003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.737299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.737308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.737584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.737593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 
00:38:30.478 [2024-10-01 17:38:28.737917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.737926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.738215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.738225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.738483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.738493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.738849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.738858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.739142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.739152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.739495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.739504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.739697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.739707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.740051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.740060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.740369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.740379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.740686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.740696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 
00:38:30.478 [2024-10-01 17:38:28.741011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.741020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.741339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.741348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.741664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.741674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.741981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.741991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.742329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.742339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.742638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.742648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.742969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.742979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.743282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.743292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.743592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.743603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.743929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.743940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 
00:38:30.478 [2024-10-01 17:38:28.744121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.744133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.744472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.744485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.744815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.744826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.745043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.745054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.745339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.745350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.745641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.745652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.745878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.745888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.746179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.746189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.746498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.746508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.746804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.746814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 
00:38:30.478 [2024-10-01 17:38:28.747103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.747114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.747434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.747445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.747745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.747755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.748068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.748079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.478 [2024-10-01 17:38:28.748294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.478 [2024-10-01 17:38:28.748306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.478 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.748637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.748649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.748951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.748962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.749279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.749290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.749622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.749633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.749960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.749970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 
00:38:30.479 [2024-10-01 17:38:28.750273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.750284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.750582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.750592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.750742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.750753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.750938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.750949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.751219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.751230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.751493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.751504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.751751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.751762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.752086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.752098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.752413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.752427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.752750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.752761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 
00:38:30.479 [2024-10-01 17:38:28.753082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.753093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.753438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.753449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.753746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.753756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.754052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.754063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.754392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.754403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.754730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.754740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.755039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.755050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.755351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.755363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.755692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.755704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.756032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.756043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 
00:38:30.479 [2024-10-01 17:38:28.756355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.756365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.756663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.756673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.757007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.757018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.757328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.757339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.757644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.757655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.757957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.757967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.758279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.758290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.758617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.758627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.758889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.758901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.759226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.759238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 
00:38:30.479 [2024-10-01 17:38:28.759570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.759581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.759905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.759916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.760221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.760232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.479 [2024-10-01 17:38:28.760536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.479 [2024-10-01 17:38:28.760547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.479 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.760859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.760869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.761171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.761181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.761486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.761496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.761779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.761790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.762108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.762119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.762438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.762450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 
00:38:30.480 [2024-10-01 17:38:28.762756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.762767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.763067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.763078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.763379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.763390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.763595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.763605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.763871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.763882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.764152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.764163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.764477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.764487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.764772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.764783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.765009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.765021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 00:38:30.480 [2024-10-01 17:38:28.765313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.480 [2024-10-01 17:38:28.765326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.480 qpair failed and we were unable to recover it. 
00:38:30.480 [2024-10-01 17:38:28.765625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.480 [2024-10-01 17:38:28.765635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.480 qpair failed and we were unable to recover it.
00:38:30.480 [2024-10-01 17:38:28.765967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.480 [2024-10-01 17:38:28.765978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.480 qpair failed and we were unable to recover it.
00:38:30.480 [... identical connect() failed (errno = 111) and sock connection error records for tqpair=0x6ff1f0 (addr=10.0.0.2, port=4420) repeat for every subsequent connection attempt, each ending with "qpair failed and we were unable to recover it." ...]
00:38:30.485 [2024-10-01 17:38:28.829914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.485 [2024-10-01 17:38:28.829927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.485 qpair failed and we were unable to recover it.
00:38:30.485 [2024-10-01 17:38:28.830104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.485 [2024-10-01 17:38:28.830116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.485 qpair failed and we were unable to recover it. 00:38:30.485 [2024-10-01 17:38:28.830383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.485 [2024-10-01 17:38:28.830394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.485 qpair failed and we were unable to recover it. 00:38:30.485 [2024-10-01 17:38:28.830724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.485 [2024-10-01 17:38:28.830735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.485 qpair failed and we were unable to recover it. 00:38:30.485 [2024-10-01 17:38:28.831064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.485 [2024-10-01 17:38:28.831075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.485 qpair failed and we were unable to recover it. 00:38:30.485 [2024-10-01 17:38:28.831401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.485 [2024-10-01 17:38:28.831411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.831592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.831604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.831913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.831924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.832218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.832229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.832533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.832544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.832848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.832859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 
00:38:30.486 [2024-10-01 17:38:28.833164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.833175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.833462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.833472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.833776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.833786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.834084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.834096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.834397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.834409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.834691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.834703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.835007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.835019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.835301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.835312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.835632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.835643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.835972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.835983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 
00:38:30.486 [2024-10-01 17:38:28.836289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.836300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.836667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.836677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.836975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.836985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.837283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.837294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.837596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.837606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.837906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.837918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.838236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.838248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.838530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.838542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.838838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.838848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.839151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.839163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 
00:38:30.486 [2024-10-01 17:38:28.839433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.839444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.839670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.839681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.839984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.840004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.840279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.840291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.840625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.840638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.840961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.840973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.841270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.841282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.841579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.841590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.841850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.486 [2024-10-01 17:38:28.841860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.486 qpair failed and we were unable to recover it. 00:38:30.486 [2024-10-01 17:38:28.842186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.842197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 
00:38:30.487 [2024-10-01 17:38:28.842499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.842510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.842823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.842833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.843135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.843146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.843433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.843443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.843743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.843754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.844153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.844165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.844495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.844506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.844831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.844841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.845139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.845150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.845465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.845475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 
00:38:30.487 [2024-10-01 17:38:28.845794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.845804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.846100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.846110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.846410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.846421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.846720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.846732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.847061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.847073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.847374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.847386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.847687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.847697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.848005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.848016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.848348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.848358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.848681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.848692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 
00:38:30.487 [2024-10-01 17:38:28.848961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.848971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.849261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.849275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.849593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.849603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.849878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.849890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.850192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.850204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.850505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.850516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.850817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.850828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.851120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.851131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.851427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.851438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.851698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.851709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 
00:38:30.487 [2024-10-01 17:38:28.851992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.852011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.852328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.852339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.852643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.852653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.852961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.852972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.853348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.853360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.853650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.853662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.854000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.487 [2024-10-01 17:38:28.854011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.487 qpair failed and we were unable to recover it. 00:38:30.487 [2024-10-01 17:38:28.854298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.854308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.854614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.854625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.854950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.854960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 
00:38:30.488 [2024-10-01 17:38:28.855282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.855293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.855597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.855607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.855928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.855938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.856230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.856240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.856532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.856543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.856843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.856855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.857192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.857204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.857535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.857546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.857846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.857856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.858150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.858161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 
00:38:30.488 [2024-10-01 17:38:28.858370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.858380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.858660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.858671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.858978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.858989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.859296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.859306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.859608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.859619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.859946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.859957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.860261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.860272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.860572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.860583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.860906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.860917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.861207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.861218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 
00:38:30.488 [2024-10-01 17:38:28.861534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.861545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.861849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.861860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.862160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.862177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.862461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.862473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.862776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.862787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.863122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.863134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.863404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.863415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.863741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.863751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.864123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.864134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.864442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.864453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 
00:38:30.488 [2024-10-01 17:38:28.864737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.864749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.865073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.865085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.865389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.865399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.865583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.865595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.865906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.865916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.866212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.488 [2024-10-01 17:38:28.866223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.488 qpair failed and we were unable to recover it. 00:38:30.488 [2024-10-01 17:38:28.866548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.866559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.866856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.866866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.867150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.867161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.867491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.867502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 
00:38:30.489 [2024-10-01 17:38:28.867772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.867783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.868089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.868100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.868433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.868445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.868773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.868784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.869083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.869095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.869367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.869379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.869708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.869718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.869999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.870011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.870332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.870343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 00:38:30.489 [2024-10-01 17:38:28.870644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.489 [2024-10-01 17:38:28.870657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.489 qpair failed and we were unable to recover it. 
00:38:30.489 [2024-10-01 17:38:28.870976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.489 [2024-10-01 17:38:28.870986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.489 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with only the microsecond timestamps changing, for roughly 200 further connection attempts between 17:38:28.871 and 17:38:28.935 ...]
00:38:30.494 [2024-10-01 17:38:28.935368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.494 [2024-10-01 17:38:28.935378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.494 qpair failed and we were unable to recover it.
00:38:30.494 [2024-10-01 17:38:28.935680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.935690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.936034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.936045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.936365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.936376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.936566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.936577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.936977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.936988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.937295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.937306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.494 qpair failed and we were unable to recover it. 00:38:30.494 [2024-10-01 17:38:28.937501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.494 [2024-10-01 17:38:28.937512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.937812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.937825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.938183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.938194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.938495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.938506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 
00:38:30.495 [2024-10-01 17:38:28.938816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.938827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.939016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.939029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.939336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.939347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.939656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.939667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.939974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.939984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.940310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.940321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.940628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.940639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.940941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.940952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.941121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.941133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.941462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.941473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 
00:38:30.495 [2024-10-01 17:38:28.941777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.941788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.942129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.942141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.942430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.942442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.942775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.942785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.943089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.943100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.943406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.943417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.943678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.943689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.943984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.943998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.944364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.944374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.944583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.944595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 
00:38:30.495 [2024-10-01 17:38:28.944918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.944928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.945255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.945265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.945560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.945570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.945875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.945886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.946220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.946233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.946515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.946526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.946909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.946920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.947229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.947240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.947562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.947573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.947904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.947916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 
00:38:30.495 [2024-10-01 17:38:28.948224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.495 [2024-10-01 17:38:28.948236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.495 qpair failed and we were unable to recover it. 00:38:30.495 [2024-10-01 17:38:28.948541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.948552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.948840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.948851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.949133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.949144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.949440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.949451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.949758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.949768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.950064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.950075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.950365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.950376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.950694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.950705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.951024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.951036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 
00:38:30.496 [2024-10-01 17:38:28.951354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.951365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.951687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.951698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.951992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.952009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.952335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.952345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.952652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.952663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.953001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.953013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.953318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.953328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.953628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.953639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.953900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.953911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.954123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.954135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 
00:38:30.496 [2024-10-01 17:38:28.954531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.954542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.954842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.954853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.955175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.955186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.955511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.955521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.955840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.955851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.956152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.956163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.956463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.956474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.956798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.956809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.957110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.957122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.957443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.957455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 
00:38:30.496 [2024-10-01 17:38:28.957716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.957727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.958001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.958013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.958281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.958292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.958597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.958607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.958926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.958937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.959228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.959241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.959544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.959555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.959858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.959869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.960175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.960186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 00:38:30.496 [2024-10-01 17:38:28.960513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.496 [2024-10-01 17:38:28.960524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.496 qpair failed and we were unable to recover it. 
00:38:30.497 [2024-10-01 17:38:28.960781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.960791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.961099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.961110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.961386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.961397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.961688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.961700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.961964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.961976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.962311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.962322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.962625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.962636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.962968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.962980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.963382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.963393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.963701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.963713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 
00:38:30.497 [2024-10-01 17:38:28.963988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.964003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.964313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.964325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.964648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.964658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.964960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.964971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.965186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.965198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.965483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.965494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.965797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.965808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.966118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.966129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.966453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.966464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.966803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.966813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 
00:38:30.497 [2024-10-01 17:38:28.967117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.967128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.967319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.967331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.967638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.967651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.967980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.967991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.968308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.968319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.968627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.968639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.968967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.968979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.969312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.969323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.969664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.969675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.969971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.969982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 
00:38:30.497 [2024-10-01 17:38:28.970301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.970312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.970588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.970598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.970901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.970912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.971249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.971260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.971595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.971607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.971944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.971955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.972246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.972258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.972561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.497 [2024-10-01 17:38:28.972572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.497 qpair failed and we were unable to recover it. 00:38:30.497 [2024-10-01 17:38:28.972908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.972918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.973223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.973234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 
00:38:30.498 [2024-10-01 17:38:28.973547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.973558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.973768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.973779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.974128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.974139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.974437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.974448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.974750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.974762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.975060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.975071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.975387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.975398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.975706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.975718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.976017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.976028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 00:38:30.498 [2024-10-01 17:38:28.976323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.498 [2024-10-01 17:38:28.976334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.498 qpair failed and we were unable to recover it. 
00:38:30.498 [2024-10-01 17:38:28.976657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.498 [2024-10-01 17:38:28.976668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.498 qpair failed and we were unable to recover it.
00:38:30.498 [... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 17:38:28.976 to 17:38:29.041 ...]
00:38:30.779 [2024-10-01 17:38:29.041295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.779 [2024-10-01 17:38:29.041305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.779 qpair failed and we were unable to recover it.
00:38:30.779 [2024-10-01 17:38:29.041610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.041621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.041925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.041936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.042227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.042238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.042520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.042531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.042832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.042843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.043148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.043160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.043480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.043491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.043816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.043829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.044128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.044139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.044434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.044444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 
00:38:30.779 [2024-10-01 17:38:29.044747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.044758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.045086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.045097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.045394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.045405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.045663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.045674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.046006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.046018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.046299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.046309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.046611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.046621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.046921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.046931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.047231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.047242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.047522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.047533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 
00:38:30.779 [2024-10-01 17:38:29.047834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.047844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.048065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.048077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.048394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.048405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.048732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.048743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.049048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.049059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.049458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.779 [2024-10-01 17:38:29.049469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.779 qpair failed and we were unable to recover it. 00:38:30.779 [2024-10-01 17:38:29.049790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.049801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.050126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.050137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.050482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.050492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.050799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.050810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 
00:38:30.780 [2024-10-01 17:38:29.051120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.051131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.051461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.051472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.051798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.051810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.052111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.052122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.052452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.052462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.052793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.052804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.053104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.053115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.053311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.053322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.053645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.053655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.053931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.053942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 
00:38:30.780 [2024-10-01 17:38:29.054239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.054250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.054541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.054552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.054853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.054865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.055162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.055173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.055487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.055498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.055797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.055807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.056138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.056149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.056434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.056445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.056749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.056762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.057068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.057079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 
00:38:30.780 [2024-10-01 17:38:29.057379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.057390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.780 [2024-10-01 17:38:29.057713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.780 [2024-10-01 17:38:29.057724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.780 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.058019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.058029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.058335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.058346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.058608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.058620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.058955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.058966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.059267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.059279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.059598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.059610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.059827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.059838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.060136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.060147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 
00:38:30.781 [2024-10-01 17:38:29.060447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.060458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.060763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.060774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.061076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.061088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.061417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.061428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.061735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.061746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.062051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.062062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.062387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.062398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.062722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.062734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.063018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.063029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.063352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.063362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 
00:38:30.781 [2024-10-01 17:38:29.063681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.063692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.063971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.063981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.064278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.064290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.064590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.064600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.064859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.064870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.065199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.065214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.065532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.065542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.065842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.065853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.066155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.066166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.066442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.066452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 
00:38:30.781 [2024-10-01 17:38:29.066754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.066765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.067064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.067075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.781 [2024-10-01 17:38:29.067365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.781 [2024-10-01 17:38:29.067376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.781 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.067708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.067720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.067996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.068008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.068336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.068347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.068674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.068685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.068938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.068948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.069251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.069262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.069567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.069577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 
00:38:30.782 [2024-10-01 17:38:29.069877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.069888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.070200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.070213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.070517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.070528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.070700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.070710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.071026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.071037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.071336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.071346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.071644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.071655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.071964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.071975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.072299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.072311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.072643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.072654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 
00:38:30.782 [2024-10-01 17:38:29.072959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.072969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.073276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.073288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.073580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.073591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.073923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.073935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.074234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.074246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.074541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.074551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.074809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.074820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.075120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.075131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.075484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.075495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.075812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.075823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 
00:38:30.782 [2024-10-01 17:38:29.076148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.076160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.076487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.076498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.076756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.076767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.077085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.077096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.077415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.077427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.077704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.077716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.782 qpair failed and we were unable to recover it. 00:38:30.782 [2024-10-01 17:38:29.078019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.782 [2024-10-01 17:38:29.078032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.078386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.078399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.078693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.078704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.079029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.079040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 
00:38:30.783 [2024-10-01 17:38:29.079386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.079397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.079609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.079619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.079941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.079951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.080226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.080237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.080539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.080550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.080854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.080864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.081164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.081176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.081469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.081480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.081674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.081685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 00:38:30.783 [2024-10-01 17:38:29.081902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.081913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it. 
00:38:30.783 [2024-10-01 17:38:29.082194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.783 [2024-10-01 17:38:29.082206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.783 qpair failed and we were unable to recover it.
00:38:30.783-00:38:30.789 [2024-10-01 17:38:29.082532 through 17:38:29.147332] the same failure sequence repeats continuously: posix.c:1055:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
00:38:30.789 [2024-10-01 17:38:29.147634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.789 [2024-10-01 17:38:29.147645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.789 qpair failed and we were unable to recover it. 00:38:30.789 [2024-10-01 17:38:29.147965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.789 [2024-10-01 17:38:29.147976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.789 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.148281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.148293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.148568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.148579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.148881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.148892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.149172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.149183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.149464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.149474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.149747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.149758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.150088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.150099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.150421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.150432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 
00:38:30.790 [2024-10-01 17:38:29.150764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.150775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.151100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.151113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.151410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.151421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.151724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.151734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.152040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.152051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.152385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.152395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.152681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.152692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.153000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.153012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.153294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.153305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.153570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.153582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 
00:38:30.790 [2024-10-01 17:38:29.153883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.153894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.154210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.154223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.154497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.154507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.154832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.154842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.155153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.155164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.155467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.155478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.155784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.155795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.156085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.156097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.156421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.156431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.156732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.156742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 
00:38:30.790 [2024-10-01 17:38:29.156921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.156934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.157234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.157245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.157547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.157558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.790 qpair failed and we were unable to recover it. 00:38:30.790 [2024-10-01 17:38:29.157748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.790 [2024-10-01 17:38:29.157759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.158098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.158109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.158439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.158450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.158727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.158737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.158897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.158908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.159085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.159097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.159406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.159417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 
00:38:30.791 [2024-10-01 17:38:29.159701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.159712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.160009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.160020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.160321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.160332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.160656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.160666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.160939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.160950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.161244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.161255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.161516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.161527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.161854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.161865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.162152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.162165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.162471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.162481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 
00:38:30.791 [2024-10-01 17:38:29.162788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.162800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.162987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.163003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.163324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.163336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.163632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.163643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.163964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.163975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.164270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.164282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.164598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.164608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.164910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.164921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.165230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.165241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 00:38:30.791 [2024-10-01 17:38:29.165520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.791 [2024-10-01 17:38:29.165530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.791 qpair failed and we were unable to recover it. 
00:38:30.792 [2024-10-01 17:38:29.165829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.165840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.166143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.166154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.166455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.166467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.166743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.166755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.167058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.167069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.167388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.167399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.167719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.167730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.168054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.168065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.168422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.168432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.168737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.168748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 
00:38:30.792 [2024-10-01 17:38:29.169049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.169060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.169354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.169364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.169664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.169675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.170028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.170040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.170287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.170298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.170588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.170599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.170902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.170913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.171187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.171197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.171378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.171390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.171694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.171705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 
00:38:30.792 [2024-10-01 17:38:29.172001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.172013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.172353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.172364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.172664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.172674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.173006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.173017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.173318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.173328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.173628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.173638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.173961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.173972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.174295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.174306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.174608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.174619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 00:38:30.792 [2024-10-01 17:38:29.174916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.792 [2024-10-01 17:38:29.174929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.792 qpair failed and we were unable to recover it. 
00:38:30.793 [2024-10-01 17:38:29.175236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.175248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.175570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.175580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.175884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.175894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.176171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.176184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.176509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.176521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.176846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.176856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.177165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.177176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.177485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.177496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.177796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.177807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.178131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.178142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 
00:38:30.793 [2024-10-01 17:38:29.178344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.178354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.178652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.178662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.178990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.179005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.179289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.179300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.179593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.179604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.179892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.179902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.180198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.180209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.180486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.180496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.180790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.180801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.181108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.181119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 
00:38:30.793 [2024-10-01 17:38:29.181425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.181436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.181762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.181773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.182102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.182114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.182438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.182449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.182751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.182761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.183042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.183054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.183352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.183365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.183661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.183673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.184003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.184015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.184227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.184237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 
00:38:30.793 [2024-10-01 17:38:29.184539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.184550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.184853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.184864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.185165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.185176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.185478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.185488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.185785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.185796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.793 qpair failed and we were unable to recover it. 00:38:30.793 [2024-10-01 17:38:29.186098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.793 [2024-10-01 17:38:29.186109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.186435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.186446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.186772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.186782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.187086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.187097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.187396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.187406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 
00:38:30.794 [2024-10-01 17:38:29.187710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.187721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.187933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.187943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.188245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.188256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.188559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.188570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.188897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.188909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.189184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.189195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.189419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.189429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.189732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.189743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.190043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.190054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.190378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.190389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 
00:38:30.794 [2024-10-01 17:38:29.190692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.190707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.191030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.191041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.191345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.191355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.191682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.191693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.191993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.192008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.192326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.192336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.192596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.192607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.192776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.192789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.193114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.193125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.193429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.193439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 
00:38:30.794 [2024-10-01 17:38:29.193741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.193752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.194084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.194096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.194401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.194411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.194714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.194726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.195029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.195041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.195329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.195340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.195640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.195650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.195955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.195968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.196299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.196309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.794 [2024-10-01 17:38:29.196592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.196603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 
00:38:30.794 [2024-10-01 17:38:29.196907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.794 [2024-10-01 17:38:29.196917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.794 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.197235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.197246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.197467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.197478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.197761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.197772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.198083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.198094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.198395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.198406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.198709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.198720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.199043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.199054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.199368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.199378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.199681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.199692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 
00:38:30.795 [2024-10-01 17:38:29.200019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.200030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.200319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.200331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.200632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.200643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.200946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.200958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.201264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.201276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.201552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.201563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.201865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.201877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.202173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.202184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.202514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.202525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.202850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.202861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 
00:38:30.795 [2024-10-01 17:38:29.203122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.203134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.203310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.203323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.203584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.203595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.203858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.203869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.204152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.204166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.204480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.204491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.204793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.204803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.205180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.205191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.205486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.205497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.205854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.205865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 
00:38:30.795 [2024-10-01 17:38:29.206163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.206174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.206467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.206478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.795 [2024-10-01 17:38:29.206779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.795 [2024-10-01 17:38:29.206790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.795 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.207089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.207100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.207404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.207415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.207753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.207764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.208061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.208072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.208268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.208279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.208462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.208474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.208691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.208701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 
00:38:30.796 [2024-10-01 17:38:29.209055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.209066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.209368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.209378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.209537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.209549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.209845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.209856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.210123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.210135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.210423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.210433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.210745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.210756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.211083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.211095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.211314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.211325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.211596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.211607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 
00:38:30.796 [2024-10-01 17:38:29.211917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.211928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.212182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.212194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.212512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.212524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.212819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.212829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.213085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.213095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.213398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.213408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.213750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.213760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.214086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.214098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.214430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.214441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.796 qpair failed and we were unable to recover it. 00:38:30.796 [2024-10-01 17:38:29.214777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.796 [2024-10-01 17:38:29.214787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 
00:38:30.797 [2024-10-01 17:38:29.215113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.215124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.215430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.215440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.215727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.215738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.216073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.216084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.216386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.216396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.216693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.216705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.216992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.217008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.217326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.217337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.217677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.217688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.217961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.217971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 
00:38:30.797 [2024-10-01 17:38:29.218290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.218301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.218591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.218602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.218899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.218910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.219173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.219185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.219312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.219325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.219787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.219879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.220401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.220494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.220913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.220952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.221432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.221525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.221887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.221899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 
00:38:30.797 [2024-10-01 17:38:29.222164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.222175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.222513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.222523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.222845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.222856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.223045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.223057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.223342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.223352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.223651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.223662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.223978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.223988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.224315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.224326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.224654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.224664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 00:38:30.797 [2024-10-01 17:38:29.224850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.797 [2024-10-01 17:38:29.224862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.797 qpair failed and we were unable to recover it. 
00:38:30.798 [2024-10-01 17:38:29.225182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.225193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.225514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.225524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.225797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.225810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.226107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.226118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.226450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.226461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.226786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.226797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.227091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.227102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.227406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.227417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.227584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.227595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.227915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.227926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 
00:38:30.798 [2024-10-01 17:38:29.228220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.228231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.228548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.228559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.228887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.228898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.229239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.229251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.229532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.229542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.229848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.229859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.230084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.230096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.230392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.230403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.230614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.230626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.230809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.230821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 
00:38:30.798 [2024-10-01 17:38:29.231122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.231133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.231465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.231475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.231800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.231811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.232131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.232142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.232446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.232457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.232717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.232727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.233003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.233014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.233327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.233338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.233656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.233666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.233944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.233954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 
00:38:30.798 [2024-10-01 17:38:29.234224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.234235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.234435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.234445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.234754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.234765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.235096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.235107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.798 [2024-10-01 17:38:29.235404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.798 [2024-10-01 17:38:29.235416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.798 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.235701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.235713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.236051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.236063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.236374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.236384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.236713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.236724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.237049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.237060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 
00:38:30.799 [2024-10-01 17:38:29.237359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.237369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.237671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.237682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.238008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.238020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.238290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.238303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.238603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.238613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.238876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.238887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.239210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.239221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.239514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.239524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.239871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.239882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 00:38:30.799 [2024-10-01 17:38:29.240157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.799 [2024-10-01 17:38:29.240168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.799 qpair failed and we were unable to recover it. 
00:38:30.799 [2024-10-01 17:38:29.240460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.799 [2024-10-01 17:38:29.240470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.799 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 17:38:29.240769 through 17:38:29.304296: each connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x6ff1f0 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:30.806 [2024-10-01 17:38:29.304597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.806 [2024-10-01 17:38:29.304608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:30.806 qpair failed and we were unable to recover it.
00:38:30.806 [2024-10-01 17:38:29.304908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.304919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.305221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.305232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.305555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.305566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.305877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.305887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.306215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.306226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.306484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.306495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.306817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.306828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:30.806 [2024-10-01 17:38:29.307128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.806 [2024-10-01 17:38:29.307139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:30.806 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.307437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.307450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.307739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.307750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 
00:38:31.085 [2024-10-01 17:38:29.308076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.308087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.308390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.308400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.308698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.308709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.309022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.309033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.309329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.309339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.309665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.309676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.309966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.309978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.310300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.310312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.310639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.310650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-01 17:38:29.310947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.310958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 
00:38:31.085 [2024-10-01 17:38:29.311264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.085 [2024-10-01 17:38:29.311275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.311567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.311579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.311912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.311923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.312222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.312233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.312537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.312549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.312868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.312879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.313157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.313169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.313473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.313484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3297441 Killed "${NVMF_APP[@]}" "$@" 00:38:31.086 [2024-10-01 17:38:29.313795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.313807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.314102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.314113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 
00:38:31.086 [2024-10-01 17:38:29.314280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.314292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:31.086 [2024-10-01 17:38:29.314623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.314635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:31.086 [2024-10-01 17:38:29.314991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.315008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:31.086 [2024-10-01 17:38:29.315329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.315341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:31.086 [2024-10-01 17:38:29.315636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.315648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.086 [2024-10-01 17:38:29.315919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.315931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.316258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.316270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.316598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.316609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 
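The xtrace lines interleaved above show target_disconnect.sh entering disconnect_init 10.0.0.2 and nvmfappstart -m 0xF0 to bring a fresh target up (the previous nvmf_tgt was killed a few lines earlier), while the host side keeps retrying its NVMe/TCP connection. With nothing listening on 10.0.0.2:4420 yet, every attempt fails with errno 111 (ECONNREFUSED), which is exactly what the repeated posix_sock_create / nvme_tcp_qpair_connect_sock messages record. As a standalone illustration only (not part of the SPDK test suite; address and port are taken from the log), a minimal connect attempt against the same endpoint reports the same errno:

    /* Illustrative sketch: a plain TCP connect to 10.0.0.2:4420. While no
     * target is listening there it fails with errno 111 (ECONNREFUSED),
     * matching the posix_sock_create errors in this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the target down this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Compiled with cc and run while the target is still down, it prints "connect() failed, errno = 111 (Connection refused)", the same condition the host qpair keeps hitting until the new target starts listening.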
00:38:31.086 [2024-10-01 17:38:29.316935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.316946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.317233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.317245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.317574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.317585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.317906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.317917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.318251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.318262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.318559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.318570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.318881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.318891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.319160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.319171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.319456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.319466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.319664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.319675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 
00:38:31.086 [2024-10-01 17:38:29.319974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.319984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.320324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.320336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.086 [2024-10-01 17:38:29.320644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.086 [2024-10-01 17:38:29.320655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.086 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.320966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.320977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.321304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.321316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.321642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.321653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.321985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.322000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.322328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.322339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.322644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.322656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.322978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.322989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 
00:38:31.087 [2024-10-01 17:38:29.323310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.323321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.323624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.323636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3298373 00:38:31.087 [2024-10-01 17:38:29.323825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.323836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3298373 00:38:31.087 [2024-10-01 17:38:29.324119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.324131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3298373 ']' 00:38:31.087 [2024-10-01 17:38:29.324471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.324483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.324599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.324611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.087 [2024-10-01 17:38:29.324834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.324846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 
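For reference, and assuming these are the usual SPDK application options, the restart command above pins the new target with a CPU core mask: -m 0xF0 = 1111 0000 in binary, i.e. cores 4 through 7 only, while -i 0 selects shared-memory instance 0. The nvmfpid=3298373 captured here is the process the test waits on next.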
00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:31.087 [2024-10-01 17:38:29.325061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.325074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:31.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:31.087 [2024-10-01 17:38:29.325381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.325393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:31.087 [2024-10-01 17:38:29.325600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.325612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 17:38:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.087 [2024-10-01 17:38:29.325876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.325888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.326198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.326209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.326561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.326573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.326864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.326876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.327205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.327217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 
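waitforlisten 3298373 above blocks until the freshly started nvmf_tgt (pid 3298373) is up and answering on the UNIX-domain RPC socket /var/tmp/spdk.sock, which is why the "Waiting for process to start up and listen..." message appears while the connect errors continue. A minimal sketch of what such a wait amounts to, assuming it boils down to retrying a connect on that socket until it succeeds (the function name and retry policy here are illustrative, not the actual helper implementation):

    /* Sketch only: poll a UNIX-domain socket until something is listening on
     * it. Roughly what waiting for /var/tmp/spdk.sock amounts to; this is an
     * assumption about the behaviour, not the real waitforlisten helper. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_unix_listener(const char *path, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }
            struct sockaddr_un addr = { 0 };
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;      /* RPC socket is up, target is listening */
            }
            close(fd);
            sleep(1);          /* target not ready yet, retry */
        }
        return -1;             /* timed out */
    }

    int main(void)
    {
        if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0) {
            printf("listener is up\n");
        } else {
            printf("timed out waiting for listener\n");
        }
        return 0;
    }

Once the RPC socket accepts connections, the test can reconfigure the target and the host-side qpair retries finally succeed instead of being refused.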
00:38:31.087 [2024-10-01 17:38:29.327591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.327602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.327883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.327895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.328220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.328233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.328462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.328474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.087 qpair failed and we were unable to recover it. 00:38:31.087 [2024-10-01 17:38:29.328804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.087 [2024-10-01 17:38:29.328815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.329136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.329147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.329493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.329505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.329805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.329817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.330037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.330050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.330252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.330263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 
00:38:31.088 [2024-10-01 17:38:29.330571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.330582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.330881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.330893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.331170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.331182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.331521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.331533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.331839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.331851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.332161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.332173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.332442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.332455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.332790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.332803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.333161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.333173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.333474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.333486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 
00:38:31.088 [2024-10-01 17:38:29.333820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.333832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.334020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.334033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.334315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.334327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.334524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.334536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.334873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.334886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.335206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.335218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.335564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.335576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.335910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.335922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.336232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.336243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.336521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.336534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 
00:38:31.088 [2024-10-01 17:38:29.336802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.336812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.337173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.337184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.337505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.337516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.337802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.337813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.338102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.338113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.088 [2024-10-01 17:38:29.338322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.088 [2024-10-01 17:38:29.338333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.088 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.338587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.338598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.338908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.338921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.339109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.339121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.339399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.339410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 
00:38:31.089 [2024-10-01 17:38:29.339682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.339693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.339984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.340000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.340203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.340214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.340510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.340521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.340783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.340794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.340984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.341005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.341324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.341335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.341677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.341689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.342006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.342018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.342317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.342328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 
00:38:31.089 [2024-10-01 17:38:29.342636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.342648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.342953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.342964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.343275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.343286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.343595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.343607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.343780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.343791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.344114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.344125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.344448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.344459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.344794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.344806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.345129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.345141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.345366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.345378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 
00:38:31.089 [2024-10-01 17:38:29.345702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.345713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.346002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.346014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.346228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.346240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.346432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.346445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.346650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.346662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.346961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.346972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.347183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.347194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.347515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.347526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.347864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.347874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.089 qpair failed and we were unable to recover it. 00:38:31.089 [2024-10-01 17:38:29.348185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.089 [2024-10-01 17:38:29.348196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 
00:38:31.090 [2024-10-01 17:38:29.348528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.348539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.348850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.348861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.349159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.349171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.349504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.349515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.349717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.349728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.350036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.350047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.350389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.350400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.350696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.350706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.351010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.351021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.351329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.351340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 
00:38:31.090 [2024-10-01 17:38:29.351604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.351614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.351815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.351828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.352145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.352157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.352477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.352489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.352663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.352672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.353001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.353012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.353318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.353328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.353644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.353654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.354001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.354012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.354341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.354352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 
00:38:31.090 [2024-10-01 17:38:29.354665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.354676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.354986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.355002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.355282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.355294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.355627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.355637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.355817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.355829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.356143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.356154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.356477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.356488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.356768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.356806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.356990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.357006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.090 qpair failed and we were unable to recover it. 00:38:31.090 [2024-10-01 17:38:29.357182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.090 [2024-10-01 17:38:29.357195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 
00:38:31.091 [2024-10-01 17:38:29.357508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.357518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.357717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.357728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.358075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.358086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.358430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.358440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.358764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.358775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.359063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.359074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.359399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.359410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.359717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.359728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.360035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.360047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.360351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.360362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 
00:38:31.091 [2024-10-01 17:38:29.360551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.360563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.360873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.360884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.361151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.361162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.361366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.361377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.361709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.361720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.362036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.362048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.362415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.362426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.362713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.362724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.362952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.362962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.363262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.363273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 
00:38:31.091 [2024-10-01 17:38:29.363581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.363591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.363887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.363898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.364204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.364215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.364539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.364550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.364882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.364893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.365201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.365212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.365534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.365544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.365864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.365874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.366152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.091 [2024-10-01 17:38:29.366163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.091 qpair failed and we were unable to recover it. 00:38:31.091 [2024-10-01 17:38:29.366453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.366464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 
00:38:31.092 [2024-10-01 17:38:29.366771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.366782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.367116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.367128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.367436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.367447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.367793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.367804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.368127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.368140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.368454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.368465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.368780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.368791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.369134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.369145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.369339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.369353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.369526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.369536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 
00:38:31.092 [2024-10-01 17:38:29.369882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.369893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.370221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.370232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.370552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.370563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.370879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.370891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.371057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.371070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.371381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.371392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.371699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.371710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.372014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.372025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.372327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.372338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.372616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.372626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 
00:38:31.092 [2024-10-01 17:38:29.372935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.372945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.373265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.373276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.373472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.373483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.373787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.373798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.374093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.374105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.374312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.374324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.374652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.374664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.375009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.375021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.375356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.375368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 00:38:31.092 [2024-10-01 17:38:29.375685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.092 [2024-10-01 17:38:29.375697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.092 qpair failed and we were unable to recover it. 
00:38:31.092 [2024-10-01 17:38:29.376008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.376020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.376331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.376342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.376654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.376665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.376842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.376854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.377156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.377168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.377502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.377518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.377838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.377850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.377846] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:38:31.093 [2024-10-01 17:38:29.377890] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:31.093 [2024-10-01 17:38:29.378019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.378031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.378177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.378187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 
00:38:31.093 [2024-10-01 17:38:29.378490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.378500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.378837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.378848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.379157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.379169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.379502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.379514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.379825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.379837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.380176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.380189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.380459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.380470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.380783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.380796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.381139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.381151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.381502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.381514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 
00:38:31.093 [2024-10-01 17:38:29.381830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.381842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.382195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.382207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.382464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.382475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.382764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.382776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.382836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.382847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.383141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.383154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.383324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.383335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.383648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.383661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.384009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.384023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.384263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.384275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 
00:38:31.093 [2024-10-01 17:38:29.384578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.384590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.384895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.384906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.385248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.385263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.093 [2024-10-01 17:38:29.385575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.093 [2024-10-01 17:38:29.385586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.093 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.385919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.385931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.386239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.386251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.386538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.386550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.386763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.386775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.387078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.387091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.387366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.387378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 
00:38:31.094 [2024-10-01 17:38:29.387541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.387553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.387926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.387938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.388243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.388255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.388531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.388543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.388736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.388748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.389071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.389083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.389416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.389427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.389771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.389783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.389969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.389982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.390194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.390206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 
00:38:31.094 [2024-10-01 17:38:29.390416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.390428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.390764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.390776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.391084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.391096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.391443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.391455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.391764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.391776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.392117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.392130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.392461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.392473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.392787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.392799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.393136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.393148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.393483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.393494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 
00:38:31.094 [2024-10-01 17:38:29.393670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.393682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.393990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.394006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.394283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.394296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.394622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.394634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.394923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.094 [2024-10-01 17:38:29.394935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.094 qpair failed and we were unable to recover it. 00:38:31.094 [2024-10-01 17:38:29.395253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.395266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.395619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.395631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.395796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.395809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.396147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.396159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.396460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.396472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 
00:38:31.095 [2024-10-01 17:38:29.396774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.396785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.396947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.396958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.397148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.397160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.397508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.397522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.397849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.397861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.398161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.398174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.398386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.398397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.398693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.398705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.399011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.399023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.399280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.399291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 
00:38:31.095 [2024-10-01 17:38:29.399572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.399583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.399775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.399787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.400091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.400102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.400385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.400396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.400729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.400740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.401049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.401060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.401391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.401401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.401661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.401672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.402012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.402024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.402251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.402262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 
00:38:31.095 [2024-10-01 17:38:29.402572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.402583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.402889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.402900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.403197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.403209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.403528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.403538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.403870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.403881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.404177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.404189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.404486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.404497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.404797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.404808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.095 [2024-10-01 17:38:29.405084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.095 [2024-10-01 17:38:29.405097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.095 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.405424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.405436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 
00:38:31.096 [2024-10-01 17:38:29.405776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.405790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.406123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.406134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.406308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.406319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.406633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.406644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.406954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.406964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.407313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.407325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.407628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.407639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.407951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.407961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.408324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.408335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.408670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.408681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 
00:38:31.096 [2024-10-01 17:38:29.408989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.409004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.409294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.409306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.409631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.409643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.409972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.409983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.410302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.410313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.410619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.410629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.410899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.410910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.411188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.411199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.411386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.411398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.411586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.411598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 
00:38:31.096 [2024-10-01 17:38:29.411929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.411940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.412248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.412259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.412547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.412557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.412881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.412892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.413246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.413257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.413550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.096 [2024-10-01 17:38:29.413561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.096 qpair failed and we were unable to recover it. 00:38:31.096 [2024-10-01 17:38:29.413724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.413736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.414060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.414071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.414300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.414312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.414651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.414662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 
00:38:31.097 [2024-10-01 17:38:29.414969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.414981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.415282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.415293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.415489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.415501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.415840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.415851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.416162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.416174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.416377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.416389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.416573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.416583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.416913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.416924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.417174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.417186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.417393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.417404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 
00:38:31.097 [2024-10-01 17:38:29.417700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.417711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.418032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.418045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.418371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.418382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.418691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.418701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.419009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.419021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.419319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.419330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.419616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.419626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.419934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.419945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.420280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.420291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.420574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.420585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 
00:38:31.097 [2024-10-01 17:38:29.420883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.420894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.421170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.421181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.421518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.421529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.421814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.421826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.422093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.422105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.422435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.422447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.422778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.422790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.423095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.423107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.423435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.097 [2024-10-01 17:38:29.423445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.097 qpair failed and we were unable to recover it. 00:38:31.097 [2024-10-01 17:38:29.423746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.423756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 
00:38:31.098 [2024-10-01 17:38:29.424058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.424070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.424283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.424294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.424415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.424425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.424614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d0b0 is same with the state(6) to be set 00:38:31.098 [2024-10-01 17:38:29.425021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.425101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.425443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.425477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9560000b90 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.425801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.425814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.426131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.426142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.426310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.426322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.426580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.426591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 
00:38:31.098 [2024-10-01 17:38:29.426917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.426927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.427205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.427216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.427576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.427587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.427888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.427899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.428062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.428073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.428344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.428354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.428638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.428649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.428982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.428992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.429254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.429265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.429594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.429605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 
00:38:31.098 [2024-10-01 17:38:29.429892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.429904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.430190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.430201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.430457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.430468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.430732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.430743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.431046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.431057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.431390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.431401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.431776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.431787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.432114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.432125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.432429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.432440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 00:38:31.098 [2024-10-01 17:38:29.432635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.432645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.098 qpair failed and we were unable to recover it. 
00:38:31.098 [2024-10-01 17:38:29.432799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.098 [2024-10-01 17:38:29.432811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.433121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.433132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.433306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.433318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.433598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.433609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.433887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.433898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.434107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.434119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.434289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.434301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.434467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.434479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.434782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.434792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.434963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.434974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 
00:38:31.099 [2024-10-01 17:38:29.435160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.435171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.435452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.435463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.435743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.435755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.436061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.436072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.436283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.436293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.436579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.436589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.436893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.436904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.437210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.437221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.437494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.437505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.437686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.437697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 
00:38:31.099 [2024-10-01 17:38:29.438013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.438025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.438361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.438372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.438579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.438590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.438891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.438902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.439057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.439069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.439417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.439428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.439594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.439605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.439764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.439775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.440050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.440061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.440395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.440406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 
00:38:31.099 [2024-10-01 17:38:29.440712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.440723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.440905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.440917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.441229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.441240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.441533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.441546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.099 [2024-10-01 17:38:29.441756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.099 [2024-10-01 17:38:29.441767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.099 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.441976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.441989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.442294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.442305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.442661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.442672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.442977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.442987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.443206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.443217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 
00:38:31.100 [2024-10-01 17:38:29.443582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.443592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.443803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.443814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.444106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.444117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.444422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.444433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.444735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.444746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.445029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.445041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.445294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.445304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.445608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.445619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.445951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.445962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.446252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.446263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 
00:38:31.100 [2024-10-01 17:38:29.446443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.446455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.446745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.446755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.447063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.447074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.447391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.447402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.447588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.447599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.447914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.447925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.448234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.448246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.448558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.448569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.448873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.448884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.449209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.449220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 
00:38:31.100 [2024-10-01 17:38:29.449551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.449562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.449885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.449897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.450078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.450089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.450365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.450376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.450703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.450714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.100 [2024-10-01 17:38:29.451050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.100 [2024-10-01 17:38:29.451061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.100 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.451382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.451393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.451580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.451592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.451919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.451929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.452250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.452261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 
00:38:31.101 [2024-10-01 17:38:29.452521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.452533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.452830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.452841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.453118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.453129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.453467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.453477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.453782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.453793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.454074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.454085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.454412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.454423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.454733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.454745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.455083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.455094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.455417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.455428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 
00:38:31.101 [2024-10-01 17:38:29.455738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.455749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.456076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.456088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.456403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.456413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.456716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.456727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.457008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.457019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.457359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.457370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.457675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.457686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.458001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.458013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.458361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.458372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.458682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.458693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 
00:38:31.101 [2024-10-01 17:38:29.459031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.459041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.459368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.459378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.459684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.101 [2024-10-01 17:38:29.459695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.101 qpair failed and we were unable to recover it. 00:38:31.101 [2024-10-01 17:38:29.460032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.460043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.460380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.460391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.460691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.460702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.461005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.461017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.461348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.461358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.461689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.461699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.461872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.461885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 
00:38:31.102 [2024-10-01 17:38:29.462210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.462221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.462566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.462579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.462891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.462902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.463226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.463237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.463385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.463395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.463701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.463711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.464056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.464068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.464282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.464293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.464628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.464638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.464976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.464986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 
00:38:31.102 [2024-10-01 17:38:29.465178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.465189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.465233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:31.102 [2024-10-01 17:38:29.465553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.465564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.465877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.465888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.466077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.466088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.466454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.466465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.466805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.466816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.467124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.467135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.467317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.467329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.467655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.467667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.467972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.467983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 
00:38:31.102 [2024-10-01 17:38:29.468322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.468334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.468672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.468683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.468986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.469001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.469336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.469346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.102 [2024-10-01 17:38:29.469681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.102 [2024-10-01 17:38:29.469691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.102 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.470005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.470017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.470339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.470350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.470663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.470674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.470984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.471008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.471333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.471345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 
00:38:31.103 [2024-10-01 17:38:29.471627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.471639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.471954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.471965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.472161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.472174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.472544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.472556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.472836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.472847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.473054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.473073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.473374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.473386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.473667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.473679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.473968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.473980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.474239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.474251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 
00:38:31.103 [2024-10-01 17:38:29.474468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.474480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.474788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.474800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.474997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.475011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.475335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.475345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.475633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.475644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.475908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.475919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.476135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.476148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.476376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.476387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.476614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.476625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.476825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.476836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 
00:38:31.103 [2024-10-01 17:38:29.477176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.477188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.477527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.477539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.477856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.477869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.478178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.478190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.478542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.478554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.478870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.478882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.479217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.479229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.479552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.479565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.479876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.103 [2024-10-01 17:38:29.479888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.103 qpair failed and we were unable to recover it. 00:38:31.103 [2024-10-01 17:38:29.480213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.480227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 
00:38:31.104 [2024-10-01 17:38:29.480563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.480574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.480794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.480806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.481121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.481134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.481401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.481413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.481716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.481728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.481912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.481924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.482254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.482266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.482571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.482582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.482921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.482933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.483268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.483279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 
00:38:31.104 [2024-10-01 17:38:29.483611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.483623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.483956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.483968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.484284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.484298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.484613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.484625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.484931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.484943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.485265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.485277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.485454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.485465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.485837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.485848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.486169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.486182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.486511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.486523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 
00:38:31.104 [2024-10-01 17:38:29.486869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.486880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.487206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.487218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.487509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.487520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.487866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.487879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.488168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.488180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.488488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.488500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.488834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.488845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.489153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.489165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.489452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.489464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.489729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.489740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 
00:38:31.104 [2024-10-01 17:38:29.490052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.490064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.490281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.490292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.104 [2024-10-01 17:38:29.490582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.104 [2024-10-01 17:38:29.490594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.104 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.490884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.490897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.491213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.491225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.491559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.491571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.491741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.491755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.492020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.492031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.492359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.492370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.492599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.492611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 
00:38:31.105 [2024-10-01 17:38:29.492932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.492945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.493238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.493249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.493575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.493586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.493915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.493926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.494255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.494266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.494597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.494609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.494910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.494922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.495260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.495272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.495580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.495590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.495896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.495908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 
00:38:31.105 [2024-10-01 17:38:29.496085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.496097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.496281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.496292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.496578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.496590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.496604] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:31.105 [2024-10-01 17:38:29.496636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:31.105 [2024-10-01 17:38:29.496644] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:31.105 [2024-10-01 17:38:29.496652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:31.105 [2024-10-01 17:38:29.496657] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:31.105 [2024-10-01 17:38:29.496893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.496803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:31.105 [2024-10-01 17:38:29.496904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.496937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:31.105 [2024-10-01 17:38:29.497049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:31.105 [2024-10-01 17:38:29.497051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:31.105 [2024-10-01 17:38:29.497198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.497208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.497495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.497505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 00:38:31.105 [2024-10-01 17:38:29.497835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.105 [2024-10-01 17:38:29.497847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.105 qpair failed and we were unable to recover it. 
[2024-10-01 17:38:29.496 through 17:38:29.555] The same three-line failure repeats for every connection attempt in this window: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 (and, for two attempts, tqpair=0x7f9560000b90) with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."
00:38:31.112 [2024-10-01 17:38:29.556227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.556238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.556540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.556551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.556860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.556871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.557018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.557029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.557334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.557346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.557505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.557517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.557881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.557893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.558197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.558213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.558497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.558508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.558802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.558812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 
00:38:31.112 [2024-10-01 17:38:29.558975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.558985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.559309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.559320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.559480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.559492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.559810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.559821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.560204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.560216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.560523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.560534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.560868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.560879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.561058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.561070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.561343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.561353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.561537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.561549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 
00:38:31.112 [2024-10-01 17:38:29.561713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.561724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.562009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.562021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.112 [2024-10-01 17:38:29.562353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.112 [2024-10-01 17:38:29.562364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.112 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.562649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.562661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.562846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.562857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.563192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.563203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.563388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.563399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.563686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.563697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.563884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.563895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.564233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.564247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 
00:38:31.113 [2024-10-01 17:38:29.564578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.564588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.564762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.564774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.565113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.565125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.565450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.565461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.565784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.565795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.566111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.566123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.566302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.566313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.566586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.566597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.566877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.566887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.567194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.567205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 
00:38:31.113 [2024-10-01 17:38:29.567524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.567535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.567795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.567806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.567978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.567990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.568327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.568339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.568615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.568625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.568809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.568821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.569130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.569141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.569447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.569458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.569781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.569794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.570177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.570188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 
00:38:31.113 [2024-10-01 17:38:29.570518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.570529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.570788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.570798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.571104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.571115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.571415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.571426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.571684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.571696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.572001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.572013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.572347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.572357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.572693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.113 [2024-10-01 17:38:29.572704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.113 qpair failed and we were unable to recover it. 00:38:31.113 [2024-10-01 17:38:29.572891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.572904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.573111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.573122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 
00:38:31.114 [2024-10-01 17:38:29.573317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.573328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.573519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.573531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.573819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.573829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.574111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.574122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.574414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.574425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.574758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.574769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.575052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.575063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.575347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.575358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.575688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.575699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.576047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.576058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 
00:38:31.114 [2024-10-01 17:38:29.576349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.576361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.576678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.576690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.576860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.576871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.577224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.577238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.577561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.577574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.577886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.577899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.578127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.578138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.578444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.578456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.578794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.578805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.579005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.579016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 
00:38:31.114 [2024-10-01 17:38:29.579394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.579405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.579591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.579602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.579882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.579893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.580109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.580120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.580312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.580324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.580377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.580388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.580669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.580679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.580966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.580977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.581289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.581301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.581634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.581645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 
00:38:31.114 [2024-10-01 17:38:29.581961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.581972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.582274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.114 [2024-10-01 17:38:29.582285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.114 qpair failed and we were unable to recover it. 00:38:31.114 [2024-10-01 17:38:29.582451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.582462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.582770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.582781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.582936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.582948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.583279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.583290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.583336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.583345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.583642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.583653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.583985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.584000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.584045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.584053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 17:38:29.584356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.584367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.584675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.584686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.585002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.585013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.585313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.585323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.585618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.585629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.585958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.585970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.586282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.586295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.586641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.586653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.586831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.586843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.587119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.587130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 17:38:29.587403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.587415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.587591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.587602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.587905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.587916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.588133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.588145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.588431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.588442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.588627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.588638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.588937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.588950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.589168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.589179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.589454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.589465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.589765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.589776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 
00:38:31.115 [2024-10-01 17:38:29.590093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.590104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.590414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.590424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.590715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.590726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.591106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.591117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.591398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.591410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.115 [2024-10-01 17:38:29.591584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.115 [2024-10-01 17:38:29.591596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.115 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.591906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.591918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.592247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.592259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.592450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.592461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.592743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.592754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 17:38:29.593072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.593083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.593394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.593405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.593619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.593629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.593946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.593957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.594277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.594290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.594622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.594634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.594948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.594960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.595180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.595192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.595476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.595487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.595793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.595804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 17:38:29.595980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.595992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.596296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.596308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.596632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.596644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.596962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.596976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.597275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.597287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.597500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.597511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.597774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.597785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.597968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.597980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.598330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.598341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.598641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.598653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 17:38:29.598968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.598979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.599170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.599182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.599495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.599506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.599840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.599851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.600036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.600047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.600370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.600381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.600705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.600716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.600901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.600913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.601075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.601087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.601382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.601393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 
00:38:31.116 [2024-10-01 17:38:29.601569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.601581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.601772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.601783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.602109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.602121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.602290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.116 [2024-10-01 17:38:29.602302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.116 qpair failed and we were unable to recover it. 00:38:31.116 [2024-10-01 17:38:29.602493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.602504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.602835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.602845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.603148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.603159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.603324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.603335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.603627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.603637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.603823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.603833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 
00:38:31.117 [2024-10-01 17:38:29.604137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.604148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.604442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.604454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.604760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.604771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.605058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.605070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.605391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.605402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.605735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.605746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.605920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.605930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.606196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.606206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.606516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.606528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.606713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.606724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 
00:38:31.117 [2024-10-01 17:38:29.606911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.606922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.607114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.607125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.607309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.607319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.607611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.607621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.607932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.607945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.608248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.608260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.608550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.608562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.608872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.608883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.609170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.609182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.609230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.609240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 
00:38:31.117 [2024-10-01 17:38:29.609509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.609520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.609825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.609835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.609992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.610018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.610283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.610294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.610603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.610615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.610950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.610962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.117 [2024-10-01 17:38:29.611286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.117 [2024-10-01 17:38:29.611296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.117 qpair failed and we were unable to recover it. 00:38:31.118 [2024-10-01 17:38:29.611604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.118 [2024-10-01 17:38:29.611616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.118 qpair failed and we were unable to recover it. 00:38:31.118 [2024-10-01 17:38:29.611903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.118 [2024-10-01 17:38:29.611915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.118 qpair failed and we were unable to recover it. 00:38:31.118 [2024-10-01 17:38:29.612264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.118 [2024-10-01 17:38:29.612275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.118 qpair failed and we were unable to recover it. 
00:38:31.118 [2024-10-01 17:38:29.612662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.118 [2024-10-01 17:38:29.612673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.118 qpair failed and we were unable to recover it. 00:38:31.118 [2024-10-01 17:38:29.612970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.118 [2024-10-01 17:38:29.612981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.118 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.613312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.613325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.613658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.613669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.614001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.614013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.614303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.614314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.614648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.614658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.614955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.614966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.615278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.615289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.615456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.615468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 
00:38:31.393 [2024-10-01 17:38:29.615800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.615811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.615998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.616017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.616183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.616195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.616508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.616826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.616838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.617163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.617174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.617455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.617466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.617651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.617662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.617921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.617933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.618249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.618260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 
00:38:31.393 [2024-10-01 17:38:29.618574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.618585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.618798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.618809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.619121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.619133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.619460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.619471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.619737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.619749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.620091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.620102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.620426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.620438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.620742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.620754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.620946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.620958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.621259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.621271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 
00:38:31.393 [2024-10-01 17:38:29.621540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.621552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.621879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.621891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.622106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.393 [2024-10-01 17:38:29.622117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.393 qpair failed and we were unable to recover it. 00:38:31.393 [2024-10-01 17:38:29.622417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.622428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.622766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.622777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.623118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.623129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.623424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.623435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.623766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.623778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.624118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.624129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.624456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.624468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 
00:38:31.394 [2024-10-01 17:38:29.624801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.624812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.625119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.625130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.625453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.625464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.625741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.625752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.626079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.626090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.626444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.626454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.626751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.626763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.627102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.627114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.627445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.627456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.627647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.627659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 
00:38:31.394 [2024-10-01 17:38:29.627991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.628005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.628301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.628312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.628645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.628658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.628991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.629006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.629185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.629197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.629390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.629402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.629565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.629578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.629859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.629869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.630165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.630176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.630345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.630356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 
00:38:31.394 [2024-10-01 17:38:29.630520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.630530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.630796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.630807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.630960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.630970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.631294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.631306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.631632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.631643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.631948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.631959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.632293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.632305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.632621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.632632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.632980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.632992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.633183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.633196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 
00:38:31.394 [2024-10-01 17:38:29.633456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.633469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.633778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.394 [2024-10-01 17:38:29.633789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.394 qpair failed and we were unable to recover it. 00:38:31.394 [2024-10-01 17:38:29.634080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.634091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.634402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.634413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.634585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.634597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.634875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.634885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.635166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.635177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.635514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.635525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.635720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.635731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.636061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.636073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 
00:38:31.395 [2024-10-01 17:38:29.636392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.636403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.636704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.636714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.636905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.636917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.637213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.637224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.637553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.637564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.637886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.637897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.638083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.638095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.638431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.638442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.638752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.638763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.638973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.638984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 
00:38:31.395 [2024-10-01 17:38:29.639283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.639294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.639579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.639590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.639776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.639789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.640106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.640118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.640395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.640406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.640734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.640745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.641082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.641093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.641377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.641387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.641694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.641705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 00:38:31.395 [2024-10-01 17:38:29.642012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.395 [2024-10-01 17:38:29.642024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.395 qpair failed and we were unable to recover it. 
00:38:31.395 [2024-10-01 17:38:29.642350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.395 [2024-10-01 17:38:29.642361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.395 qpair failed and we were unable to recover it.
[the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back from 17:38:29.642 through 17:38:29.702]
00:38:31.401 [2024-10-01 17:38:29.702705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.401 [2024-10-01 17:38:29.702716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.401 qpair failed and we were unable to recover it. 00:38:31.401 [2024-10-01 17:38:29.703027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.703039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.703348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.703359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.703527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.703539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.703711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.703721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.704027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.704038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.704315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.704326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.704621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.704632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.704973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.704983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.705360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.705372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 17:38:29.705676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.705688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.706013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.706024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.706365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.706377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.706756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.706768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.707026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.707039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.707347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.707358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.707643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.707654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.707916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.707927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.708228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.708240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.708579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.708590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 17:38:29.708819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.708830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.709043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.709055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.709375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.709386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.709690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.709702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.710032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.710043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.710344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.710355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.710664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.710675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.710851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.710863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.711214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.711226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.711556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.711567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 
00:38:31.402 [2024-10-01 17:38:29.711894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.711904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.712170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.712181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.402 [2024-10-01 17:38:29.712506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.402 [2024-10-01 17:38:29.712517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.402 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.712822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.712833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.713122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.713134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.713402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.713412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.713733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.713745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.714079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.714089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.714372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.714383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.714718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.714730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 17:38:29.715061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.715073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.715363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.715374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.715710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.715721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.716068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.716079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.716246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.716256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.716585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.716596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.716777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.716789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.717067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.717078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.717356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.717367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.717559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.717570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 17:38:29.717899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.717910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.718071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.718082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.718391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.718401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.718705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.718717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.718897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.718909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.719096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.719109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.719284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.719295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.719631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.719642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.719831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.719843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.403 [2024-10-01 17:38:29.720158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.720169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 
00:38:31.403 [2024-10-01 17:38:29.720350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.403 [2024-10-01 17:38:29.720362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.403 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.720533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.720543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.720711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.720722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.720886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.720896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.721223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.721234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.721512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.721524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.721712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.721722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.722031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.722042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.722340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.722351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.722682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.722693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 
00:38:31.404 [2024-10-01 17:38:29.723022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.723033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.723446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.723458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.723756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.723767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.723960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.723970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.724277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.724288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.724623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.724635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.724893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.724904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.725089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.725101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.725288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.725301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.725624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.725635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 
00:38:31.404 [2024-10-01 17:38:29.725838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.725850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.726168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.726179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.726527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.726537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.726916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.726927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.727206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.727217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.727546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.727557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.727889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.727901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.728233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.728244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.728541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.728552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.728768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.728779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 
00:38:31.404 [2024-10-01 17:38:29.728969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.728981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.729192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.729205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.729371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.404 [2024-10-01 17:38:29.729383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.404 qpair failed and we were unable to recover it. 00:38:31.404 [2024-10-01 17:38:29.729610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.729621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.729949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.729960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.730317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.730329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.730651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.730663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.730996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.731008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.731201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.731212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.731532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.731542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 17:38:29.731875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.731886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.732123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.732135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.732347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.732357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.732659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.732670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.732835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.732848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.733144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.733155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.733466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.733476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.733777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.733788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.733969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.733981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.734304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.734316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 17:38:29.734510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.734521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.734812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.734823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.735170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.735181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.735490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.735501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.735761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.735772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.735951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.735962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.736300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.736311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.736685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.736697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.736903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.736914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.737185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.737196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 
00:38:31.405 [2024-10-01 17:38:29.737367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.737377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.737708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.737719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.738057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.738068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.738226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.738238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.738524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.738535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.738869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.405 [2024-10-01 17:38:29.738879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.405 qpair failed and we were unable to recover it. 00:38:31.405 [2024-10-01 17:38:29.739201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 17:38:29.739213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 17:38:29.739541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 17:38:29.739554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 17:38:29.739742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 17:38:29.739753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 00:38:31.406 [2024-10-01 17:38:29.740037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.406 [2024-10-01 17:38:29.740048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.406 qpair failed and we were unable to recover it. 
00:38:31.406 [2024-10-01 17:38:29.740383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.406 [2024-10-01 17:38:29.740394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.406 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats more than 200 times without interruption, timestamps 2024-10-01 17:38:29.740383 through 17:38:29.801858 (console time 00:38:31.406 to 00:38:31.412): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." Nearly every repetition reports tqpair=0x6ff1f0; three consecutive attempts around 17:38:29.760-761 report tqpair=0x7f9560000b90 before the log returns to 0x6ff1f0. ...]
00:38:31.412 [2024-10-01 17:38:29.802190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.802201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.802497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.802510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.802836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.802848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.803171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.803182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.803514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.803526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.803698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.803710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.803878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.803890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.804069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.804081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.804247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.804257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.804424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.804434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 
00:38:31.412 [2024-10-01 17:38:29.804707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.804718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.805057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.805068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.805372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.805383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.805564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.412 [2024-10-01 17:38:29.805576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.412 qpair failed and we were unable to recover it. 00:38:31.412 [2024-10-01 17:38:29.805862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.805872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.806210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.806221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.806266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.806275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.806511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.806522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.806816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.806826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.807170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.807182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 17:38:29.807469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.807479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.807867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.807878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.808185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.808197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.808503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.808515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.808806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.808818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.809160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.809171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.809510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.809521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.809715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.809725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.810040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.810053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.810365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.810375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 17:38:29.810711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.810722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.811017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.811029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.811353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.811365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.811675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.811686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.811965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.811976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.812157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.812170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.812502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.812513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.812793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.812803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.813109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.813120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.813447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.813457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 
00:38:31.413 [2024-10-01 17:38:29.813631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.813643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.813901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.813912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.814228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.814239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.814570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.413 [2024-10-01 17:38:29.814580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.413 qpair failed and we were unable to recover it. 00:38:31.413 [2024-10-01 17:38:29.814903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.814914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.815227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.815238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.815526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.815537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.815842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.815853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.816182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.816193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.816478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.816489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 
00:38:31.414 [2024-10-01 17:38:29.816784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.816795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.817063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.817075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.817237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.817248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.817433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.817444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.817776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.817787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.818120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.818134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.818471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.818483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.818807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.818818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.819102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.819113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.819278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.819290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 
00:38:31.414 [2024-10-01 17:38:29.819463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.819473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.819766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.819776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.820101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.820112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.820380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.820391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.820581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.820591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.820789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.820800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.821122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.821133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.821423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.821433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.821615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.821626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.821958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.821970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 
00:38:31.414 [2024-10-01 17:38:29.822157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.822168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.822361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.822372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.822559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.822571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.822755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.822766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.822974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.822986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.414 qpair failed and we were unable to recover it. 00:38:31.414 [2024-10-01 17:38:29.823344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.414 [2024-10-01 17:38:29.823356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.823534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.823545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.823730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.823743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.823927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.823939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.824225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.824237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 
00:38:31.415 [2024-10-01 17:38:29.824560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.824572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.824873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.824885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.825200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.825211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.825503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.825514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.825824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.825835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.826147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.826158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.826418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.826428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.826567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.826578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.826869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.826880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.827067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.827078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 
00:38:31.415 [2024-10-01 17:38:29.827366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.827376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.827574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.827586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.827798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.827808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.828121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.828132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.828450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.828461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.828745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.828756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.829062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.829075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.829264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.829276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.829560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.829571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.829902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.829914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 
00:38:31.415 [2024-10-01 17:38:29.830221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.830232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.830573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.830583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.830899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.830909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.831240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.831251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.831441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.831451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.831774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.831785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.832104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.832115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.415 qpair failed and we were unable to recover it. 00:38:31.415 [2024-10-01 17:38:29.832162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.415 [2024-10-01 17:38:29.832171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.832467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.832477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.832793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.832804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 
00:38:31.416 [2024-10-01 17:38:29.833153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.833165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.833461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.833471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.833789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.833800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.834098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.834109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.834371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.834382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.834709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.834720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.835000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.835012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.835228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.835239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.835549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.835559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.835903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.835914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 
00:38:31.416 [2024-10-01 17:38:29.836277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.836288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.836617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.836628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.836962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.836973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.837292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.837305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.837651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.837662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.838000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.838011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.838319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.838331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.838632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.838642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.838955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.838966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 00:38:31.416 [2024-10-01 17:38:29.839281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.416 [2024-10-01 17:38:29.839292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.416 qpair failed and we were unable to recover it. 
00:38:31.416 [2024-10-01 17:38:29.839639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:31.416 [2024-10-01 17:38:29.839650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 
00:38:31.416 qpair failed and we were unable to recover it. 
00:38:31.416 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt of tqpair=0x6ff1f0 against 10.0.0.2 port 4420 from 17:38:29.839 through 17:38:29.900 ...] 
00:38:31.423 [2024-10-01 17:38:29.900609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:31.423 [2024-10-01 17:38:29.900620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 
00:38:31.423 qpair failed and we were unable to recover it. 
00:38:31.423 [2024-10-01 17:38:29.900933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.900944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.901275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.901289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.901613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.901625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.901932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.901943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.902252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.902265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.902598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.902609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.902936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.902948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.903282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.903293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.903636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.903647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.903980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.903991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 
00:38:31.423 [2024-10-01 17:38:29.904318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.904330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.904628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.904639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.904967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.904979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.905316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.905328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.905650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.905662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.906073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.906085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.906384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.906395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.906713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.906723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.907057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.907068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.907112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.907121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 
00:38:31.423 [2024-10-01 17:38:29.907268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.907278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.907579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.907590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.423 qpair failed and we were unable to recover it. 00:38:31.423 [2024-10-01 17:38:29.907896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.423 [2024-10-01 17:38:29.907907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.908232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.908243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.908566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.908577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.908911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.908922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.909104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.909114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.909440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.909451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.909499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.909514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.909797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.909807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 
00:38:31.424 [2024-10-01 17:38:29.909851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.909860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.910055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.910067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.910384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.910395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.910733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.910743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.911074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.911085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.911423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.911433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.911611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.911621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.911717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.911729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.912072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.912083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.912404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.912414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 
00:38:31.424 [2024-10-01 17:38:29.912721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.912733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.913077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.913088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.913274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.913286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.913617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.913628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.913981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.913991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.914280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.914290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.914451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.914463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.914657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.914667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.424 [2024-10-01 17:38:29.914852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.424 [2024-10-01 17:38:29.914864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.424 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.915196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.915208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 
00:38:31.425 [2024-10-01 17:38:29.915540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.915550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.915834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.915845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.916177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.916188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.916520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.916530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.916867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.916878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.917250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.917261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.917561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.917573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.917899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.917911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.918086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.918098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.918429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.918441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 
00:38:31.425 [2024-10-01 17:38:29.918765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.918775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.919040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.919052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.919376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.919387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.919659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.919670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.919933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.919944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.920118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.920128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.920445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.920456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.920627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.920637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.920965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.920975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.921328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.921341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 
00:38:31.425 [2024-10-01 17:38:29.921642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.921654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.921974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.921985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.922323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.922334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.922640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.922650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.922938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.922949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.923219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.923232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.923555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.923565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.923881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.923892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.425 [2024-10-01 17:38:29.924199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.425 [2024-10-01 17:38:29.924210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.425 qpair failed and we were unable to recover it. 00:38:31.426 [2024-10-01 17:38:29.924514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.426 [2024-10-01 17:38:29.924525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.426 qpair failed and we were unable to recover it. 
00:38:31.426 [2024-10-01 17:38:29.924807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.426 [2024-10-01 17:38:29.924818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.426 qpair failed and we were unable to recover it. 00:38:31.700 [2024-10-01 17:38:29.925132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.700 [2024-10-01 17:38:29.925144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.700 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.925451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.925463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.925799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.925811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.926149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.926160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.926427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.926438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.926772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.926784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.926978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.926989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.927262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.927273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.927571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.927582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 
00:38:31.701 [2024-10-01 17:38:29.927906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.927919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.928235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.928246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.928583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.928593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.928902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.928913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.929182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.929193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.929363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.929373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.929558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.929573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.929728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.929739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.930088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.930099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.930436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.930447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 
00:38:31.701 [2024-10-01 17:38:29.930762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.930776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.930968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.930980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.931321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.931334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.931633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.931645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.931982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.932004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.932291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.932302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.932490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.932501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.932817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.932828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.933103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.933116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.933439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.933450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 
00:38:31.701 [2024-10-01 17:38:29.933756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.933769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.933932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.933944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.934284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.934296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.934600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.934611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.934807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.934819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.934986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.935002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.701 [2024-10-01 17:38:29.935327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.701 [2024-10-01 17:38:29.935338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.701 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.935672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.935682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.936006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.936018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.936334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.936345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 
00:38:31.702 [2024-10-01 17:38:29.936535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.936547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.936888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.936898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.937230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.937241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.937401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.937411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.937723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.937733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.938063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.938074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.938404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.938415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.938710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.938721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.939036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.939047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 00:38:31.702 [2024-10-01 17:38:29.939364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.702 [2024-10-01 17:38:29.939375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.702 qpair failed and we were unable to recover it. 
00:38:31.702 [2024-10-01 17:38:29.939685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.702 [2024-10-01 17:38:29.939695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.702 qpair failed and we were unable to recover it.
00:38:31.702 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 17:38:29.939685 through 17:38:29.999912; duplicate entries elided ...]
00:38:31.709 [2024-10-01 17:38:29.999902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.709 [2024-10-01 17:38:29.999912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.709 qpair failed and we were unable to recover it.
00:38:31.709 [2024-10-01 17:38:30.000218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.000229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.000560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.000574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.000875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.000894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.001208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.001225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.001332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.001347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.001651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.001667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.002468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.002490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.002826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.002846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.003171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.003186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.003258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.003272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 
00:38:31.709 [2024-10-01 17:38:30.003556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.003573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.003899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.003917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.004321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.004341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.004653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.004671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.005021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.005039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.005343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.005360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.005433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.005449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.005738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.005755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.005984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.006020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.006357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.006375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 
00:38:31.709 [2024-10-01 17:38:30.006681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.006697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.007010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.007030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.007242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.007257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.007496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.007511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.007832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.007851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.008058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.008076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.008257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.008274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.008451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.008469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.008803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.008821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 00:38:31.709 [2024-10-01 17:38:30.009155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.709 [2024-10-01 17:38:30.009174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.709 qpair failed and we were unable to recover it. 
00:38:31.710 [2024-10-01 17:38:30.009451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.009469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.009563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.009572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Write completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 Read completed with error (sct=0, sc=8) 00:38:31.710 starting I/O failed 00:38:31.710 [2024-10-01 17:38:30.010312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:38:31.710 [2024-10-01 17:38:30.010717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.010775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.011120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.011159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.011443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.011476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.011597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.011642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.011909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.011940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.012327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.012361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.012616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.012647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.012760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.012790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.013056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.013089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.013365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.013396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 
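The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries above, capped by spdk_nvme_qpair_process_completions reporting "CQ transport error -6 (No such device or address) on qpair id 1", appears to be the point where the outstanding I/O on qpair 1 is completed with an error status as the TCP connection is torn down; the reconnect attempts that follow target a new tqpair (0x7f956c000b90). As a rough, stand-alone illustration of the two codes in those messages (plain C, no SPDK headers; the sct/sc values and the -6 are simply copied from the log), the sketch below names the NVMe status code type and maps the negative transport error back through strerror():

    /* Illustrative decoder only -- not SPDK code.  The values are taken
     * verbatim from the log above: (sct=0, sc=8) and transport error -6. */
    #include <stdio.h>
    #include <string.h>

    /* NVMe status code type (SCT) names from the NVMe base specification. */
    static const char *sct_name(int sct)
    {
        switch (sct) {
        case 0: return "generic command status";
        case 1: return "command specific status";
        case 2: return "media and data integrity errors";
        case 3: return "path related status";
        default: return "vendor specific / reserved";
        }
    }

    int main(void)
    {
        int sct = 0, sc = 0x8;   /* as printed: (sct=0, sc=8) */
        int transport_err = -6;  /* as printed: CQ transport error -6 */

        printf("sct=%d (%s), sc=0x%02x\n", sct, sct_name(sct), sc);
        /* -6 is -ENXIO, so this prints "No such device or address",
         * matching the text of the log message. */
        printf("transport error %d (%s)\n", transport_err, strerror(-transport_err));
        return 0;
    }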
00:38:31.710 [2024-10-01 17:38:30.013627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.013660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.013785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.013817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.014282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.014313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.014673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.014702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.015073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.015104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.015456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.015485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.015837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.015866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.710 [2024-10-01 17:38:30.016309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.710 [2024-10-01 17:38:30.016340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.710 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.016693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.016723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.017084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.017115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 
00:38:31.711 [2024-10-01 17:38:30.017518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.017547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.017883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.017912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.018295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.018325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.018661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.018691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.019030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.019062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.019373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.019402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.019753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.019782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.020114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.020145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.020496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.020526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.020846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.020878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 
00:38:31.711 [2024-10-01 17:38:30.021162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.021195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.021574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.021603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.021947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.021976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.022313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.022344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.022692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.022722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.023077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.023110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.023452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.023482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.023848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.023877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.024205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.024235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.024559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.024588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 
00:38:31.711 [2024-10-01 17:38:30.024953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.024983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.025373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.025403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.025740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.025769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.711 [2024-10-01 17:38:30.026087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.711 [2024-10-01 17:38:30.026125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.711 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.026425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.026456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.026789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.026818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.027155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.027187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.027545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.027575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.027910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.027940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.028294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.028325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 
00:38:31.712 [2024-10-01 17:38:30.028643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.028673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.029009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.029040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.029405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.029434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.029796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.029826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.030023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.030056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.030268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.030298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.030521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.030557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.030898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.030930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.031159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.031195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.031523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.031553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 
00:38:31.712 [2024-10-01 17:38:30.031919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.031948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.032301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.032332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.032689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.032718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.033050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.033080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.033308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.033337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.033671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.033700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.034016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.034047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.034386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.034415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.034767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.034795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.035133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.035164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 
00:38:31.712 [2024-10-01 17:38:30.035397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.035430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.035788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.035818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.036152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.036183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.036278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.712 [2024-10-01 17:38:30.036305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.712 qpair failed and we were unable to recover it. 00:38:31.712 [2024-10-01 17:38:30.036643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.036673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.036977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.037015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.037217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.037247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.037569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.037598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.037916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.037945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.038367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.038397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 
00:38:31.713 [2024-10-01 17:38:30.038558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.038587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.038789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.038819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.039179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.039210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.039583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.039618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.039879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.039909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.040065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.040095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.040232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.040260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.040494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.040524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.040668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.040697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.041079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.041109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 
00:38:31.713 [2024-10-01 17:38:30.041376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.041406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.041624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.041653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.041906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.041933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.042257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.042287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.042668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.042697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.043034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.043069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.043285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.043316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.043707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.043737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.044116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.044147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.044495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.044524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 
00:38:31.713 [2024-10-01 17:38:30.044794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.044824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.045149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.045181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.045398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.045430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.045651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.045680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.045911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.045942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.046183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.046213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.046559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.713 [2024-10-01 17:38:30.046588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.713 qpair failed and we were unable to recover it. 00:38:31.713 [2024-10-01 17:38:30.046943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.046972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.047373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.047405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.047702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.047732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 
00:38:31.714 [2024-10-01 17:38:30.048092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.048124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.048440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.048471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.048812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.048842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.049171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.049202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.049541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.049571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.049802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.049832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.050182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.050213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.050536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.050565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.050905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.050935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.051260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.051290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 
00:38:31.714 [2024-10-01 17:38:30.051628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.051657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.051889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.051919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.052271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.052303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.052645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.052682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.053027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.053058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.053440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.053469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.053831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.053860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.054072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.054102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.054471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.054501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.054732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.054761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 
00:38:31.714 [2024-10-01 17:38:30.054954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.054983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.055306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.055335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.055566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.055595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.055786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.055816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.056020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.714 [2024-10-01 17:38:30.056050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.714 qpair failed and we were unable to recover it. 00:38:31.714 [2024-10-01 17:38:30.056410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.056439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.056642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.056671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.057026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.057057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.057393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.057423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.057762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.057792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 
00:38:31.715 [2024-10-01 17:38:30.058118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.058147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.058486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.058515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.058806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.058836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.059040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.059070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.059293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.059322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.059628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.059658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.060019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.060051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.060254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.060283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.060660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.060689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.060991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.061032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 
00:38:31.715 [2024-10-01 17:38:30.061303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.061333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.061628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.061657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.062029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.062060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.062448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.062477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.062725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.062754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.063108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.063139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.063470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.063499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.063839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.063869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.064117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.064148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.064237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.064264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 
00:38:31.715 [2024-10-01 17:38:30.064644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.064672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.065033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.065063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.065263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.065293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.065653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.065688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.066044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.066073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.066384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.066413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.066606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.066636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.066871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.715 [2024-10-01 17:38:30.066899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.715 qpair failed and we were unable to recover it. 00:38:31.715 [2024-10-01 17:38:30.067215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.067244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.067587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.067617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 
00:38:31.716 [2024-10-01 17:38:30.067825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.067854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.068244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.068276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.068630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.068659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.068976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.069015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.069222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.069252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.069594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.069622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.070009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.070040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.070391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.070421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.070782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.070812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.070955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.070983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 
00:38:31.716 [2024-10-01 17:38:30.071212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.071242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.071465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.071495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.071850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.071879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.072223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.072254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.072595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.072624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.072949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.072978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.073205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.073235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.073490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.073524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.073876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.073906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.074145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.074174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 
00:38:31.716 [2024-10-01 17:38:30.074538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.074568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.074904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.074935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.075190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.075221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.075602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.075631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.075909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.075938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.076198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.076230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.076583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.076612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.076757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.076785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.077144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.716 [2024-10-01 17:38:30.077175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.716 qpair failed and we were unable to recover it. 00:38:31.716 [2024-10-01 17:38:30.077396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.077425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 
00:38:31.717 [2024-10-01 17:38:30.077753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.077781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.078104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.078134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.078351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.078379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.078744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.078784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.079131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.079162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.079469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.079500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.079722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.079752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.079974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.080016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.080335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.080366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.080588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.080617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 
00:38:31.717 [2024-10-01 17:38:30.080939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.080969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.081425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.081466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.081808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.081824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.082153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.082167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.082445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.082457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.082802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.082813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.082986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.083007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.083485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.083524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.083716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.083729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.083910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.083922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 
00:38:31.717 [2024-10-01 17:38:30.084234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.084274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.084498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.084512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.084809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.084821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.085162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.085174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.085450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.085461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.085648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.085659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.085860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.085872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.086037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.717 [2024-10-01 17:38:30.086049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.717 qpair failed and we were unable to recover it. 00:38:31.717 [2024-10-01 17:38:30.086358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.086369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.086595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.086607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 
00:38:31.718 [2024-10-01 17:38:30.086794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.086810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.087145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.087156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.087483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.087494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.087662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.087675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.087965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.087975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.088284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.088295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.088575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.088587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.088869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.088880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.089206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.089218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.089555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.089568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 
00:38:31.718 [2024-10-01 17:38:30.089899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.089911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.090282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.090294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.090638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.090650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.090981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.090992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.091226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.091238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.091559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.091570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.091884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.091895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.092198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.092209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.092542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.092554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.092834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.092846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 
00:38:31.718 [2024-10-01 17:38:30.093109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.093120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.093427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.093439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.093782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.093794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.094078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.094089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.094429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.094440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.094748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.094759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.095028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.095040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.095321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.095332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.095614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.095626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 00:38:31.718 [2024-10-01 17:38:30.095961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.718 [2024-10-01 17:38:30.095972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.718 qpair failed and we were unable to recover it. 
00:38:31.719 [2024-10-01 17:38:30.096275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.719 [2024-10-01 17:38:30.096287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.719 qpair failed and we were unable to recover it.
00:38:31.719 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 17:38:30.096 and 17:38:30.155 ...]
00:38:31.725 [2024-10-01 17:38:30.155136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.725 [2024-10-01 17:38:30.155147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.725 qpair failed and we were unable to recover it.
00:38:31.725 [2024-10-01 17:38:30.155466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.155476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.155785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.155796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.155990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.156004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.156305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.156317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.156623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.156634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.156968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.156982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.157298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.157310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.157594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.157605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.157971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.157982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.158285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.158296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 
00:38:31.725 [2024-10-01 17:38:30.158628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.158640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.158925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.158936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.159200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.159211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.725 qpair failed and we were unable to recover it. 00:38:31.725 [2024-10-01 17:38:30.159532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.725 [2024-10-01 17:38:30.159542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.159734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.159746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.160082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.160094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.160421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.160432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.160641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.160652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.160958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.160970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.161281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.161293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 
00:38:31.726 [2024-10-01 17:38:30.161624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.161636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.161943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.161955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.162253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.162264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.162554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.162564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.162760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.162771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.163075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.163087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.163436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.163446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.163773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.163785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.164073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.164084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.164407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.164418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 
00:38:31.726 [2024-10-01 17:38:30.164752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.164764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.165066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.165077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.165392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.165403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.165717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.165728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.166034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.166046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.166377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.166388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.166575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.166586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.166911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.166922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 [2024-10-01 17:38:30.167228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.167240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 00:38:31.726 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:31.726 [2024-10-01 17:38:30.167577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.726 [2024-10-01 17:38:30.167588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.726 qpair failed and we were unable to recover it. 
00:38:31.726 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:38:31.726 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:38:31.726 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:31.726 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[Interleaved with the four trace lines above, the same connect() failure sequence repeats for retries timestamped from 17:38:30.167903 through 17:38:30.170258.]
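Note on the failure loop above: errno = 111 on Linux is ECONNREFUSED, i.e. no listener is currently accepting TCP connections on 10.0.0.2:4420 (4420 is the conventional NVMe over Fabrics port), which is consistent with this target-disconnect test exercising the host's reconnect path while the target side is not serving the port. A minimal sketch in plain POSIX C (not SPDK code; the loopback address and the assumption that nothing listens on the chosen port are placeholders for illustration only) shows how the same errno surfaces from connect():

/*
 * Illustrative sketch only (plain POSIX sockets, not SPDK): connect() to a
 * TCP port with no listener fails with ECONNREFUSED, reported as errno 111
 * on Linux, matching the posix_sock_create error records in this log.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port seen in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder; the test targets 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux sets errno to ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a port with no listener, this prints "connect() failed, errno = 111 (Connection refused)", the same condition the SPDK host keeps hitting and retrying above.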
00:38:31.727 [2024-10-01 17:38:30.170581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.727 [2024-10-01 17:38:30.170591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420
00:38:31.727 qpair failed and we were unable to recover it.
[The same failure sequence continues to repeat for retries timestamped from 17:38:30.170902 through 17:38:30.203380, every attempt failing with errno = 111 against tqpair=0x6ff1f0, addr=10.0.0.2, port=4420.]
00:38:31.730 [2024-10-01 17:38:30.203694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.203704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.204017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.204028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.204326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.204336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.204646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.204655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.204947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.204957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.205268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.205279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.205621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.205632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.205946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.205956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.206286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.206297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.730 qpair failed and we were unable to recover it. 00:38:31.730 [2024-10-01 17:38:30.206627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.730 [2024-10-01 17:38:30.206637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 
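For reference, errno = 111 in the posix_sock_create records above is ECONNREFUSED on Linux: the host side keeps retrying while nothing is accepting on 10.0.0.2:4420, which is consistent with the target-disconnect scenario this test exercises, and each qpair attempt fails until a listener comes back. A minimal shell probe of the same condition, assuming a host with bash and nc available (an illustration only, not part of the test itself):

    # Exit status is non-zero and the connect is refused while no NVMe/TCP listener is up
    nc -zv 10.0.0.2 4420 || echo "connection refused (errno 111 / ECONNREFUSED)"
    # Pure-bash alternative using /dev/tcp
    (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null || echo "still refused"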
00:38:31.731 [2024-10-01 17:38:30.206916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.206926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.207282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.207291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.207569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.207579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.207764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.207774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:31.731 [2024-10-01 17:38:30.208070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.208082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.208294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.208304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:31.731 [2024-10-01 17:38:30.208590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.208601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.731 [2024-10-01 17:38:30.208977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.731 [2024-10-01 17:38:30.208989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 
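The interleaved xtrace lines above show nvmf_target_disconnect_tc2 arming its cleanup trap and then creating the Malloc0 bdev over RPC. A standalone sketch of that RPC step, assuming a running SPDK target app and the default RPC socket at /var/tmp/spdk.sock (rpc_cmd in the harness effectively forwards to this client):

    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0
    # (same arguments as the rpc_cmd trace above)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0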
00:38:31.731 [2024-10-01 17:38:30.209290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.209300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.209584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.209595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.209929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.209940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.210226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.210237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.210538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.210547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.210743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.210761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.211010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.211020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.211332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.211342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.211529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.211538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.211761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.211773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 
00:38:31.731 [2024-10-01 17:38:30.212084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.212094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.212492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.212501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.212774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.212784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.213075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.213085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.213399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.213409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.213609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.213619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.213992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.214007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.214388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.214398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.214667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.214677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.214871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.214883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 
00:38:31.731 [2024-10-01 17:38:30.215225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.215236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.215532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.215542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.731 qpair failed and we were unable to recover it. 00:38:31.731 [2024-10-01 17:38:30.215845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.731 [2024-10-01 17:38:30.215856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.216068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.216078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.216386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.216396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.216680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.216690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.216983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.216993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.217315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.217326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.217638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.217649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.217968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.217979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 
00:38:31.732 [2024-10-01 17:38:30.218156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.218166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.218485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.218497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.218685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.218696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.218916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.218926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.219109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.219120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.219332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.219342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.219651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.219661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.219731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.219740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.219943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.219952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.220225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.220235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 
00:38:31.732 [2024-10-01 17:38:30.220546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.220556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.220860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.220871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.221250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.221261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.221449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.221459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.221761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.221771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.222065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.222076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.222366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.222376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.222720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.222730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.223014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.223025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.223151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.223161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 
00:38:31.732 Malloc0 00:38:31.732 [2024-10-01 17:38:30.223483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.223493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.223831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.223841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 [2024-10-01 17:38:30.224013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.224024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.732 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.732 [2024-10-01 17:38:30.224210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.732 [2024-10-01 17:38:30.224221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.732 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.224564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.224575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:31.733 [2024-10-01 17:38:30.224906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.224917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.733 [2024-10-01 17:38:30.225097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.225107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.733 [2024-10-01 17:38:30.225423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.225434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 
00:38:31.733 [2024-10-01 17:38:30.225741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.225753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.225957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.225967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.226191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.226201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.226424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.226434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.226741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.226751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.226894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.226904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.227176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.227186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.227474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.227484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.227683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.227694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.227895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.227905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 
00:38:31.733 [2024-10-01 17:38:30.228086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.228096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.228433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.228443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.228712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.228721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.229063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.229073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.229398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.229408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.229616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.229625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.229866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.229876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.230080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.230091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.230419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.230428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.230620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.230639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 
00:38:31.733 [2024-10-01 17:38:30.230818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.230827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.230836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.733 [2024-10-01 17:38:30.231199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.231210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.231401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.231410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.231619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.231629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.733 [2024-10-01 17:38:30.231806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.733 [2024-10-01 17:38:30.231817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.733 qpair failed and we were unable to recover it. 00:38:31.734 [2024-10-01 17:38:30.232167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.734 [2024-10-01 17:38:30.232177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.734 qpair failed and we were unable to recover it. 00:38:31.734 [2024-10-01 17:38:30.232366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.734 [2024-10-01 17:38:30.232376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.734 qpair failed and we were unable to recover it. 00:38:31.734 [2024-10-01 17:38:30.232538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.734 [2024-10-01 17:38:30.232552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.734 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.232848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.232859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.233079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.233091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 
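The "*** TCP Transport Init ***" notice above confirms that the nvmf_create_transport call from the trace took effect on the target. A standalone sketch of that call, again assuming the default RPC socket; the additional transport options abbreviated as "-o" in the trace are left out here rather than guessed:

    # Create the TCP transport on the target before any subsystems or listeners are added
    scripts/rpc.py nvmf_create_transport -t tcp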
00:38:31.997 [2024-10-01 17:38:30.233264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.233273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.233446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.233455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.233698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.233708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.234079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.234089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.234357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.234366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.234654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.234665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.234956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.234966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.235260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.235270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.235601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.235611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.235939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.235949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 
00:38:31.997 [2024-10-01 17:38:30.236283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.236293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.236579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.236589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.236927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.236937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.237232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.237242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.237461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.237471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.237741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.997 [2024-10-01 17:38:30.237751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.997 qpair failed and we were unable to recover it. 00:38:31.997 [2024-10-01 17:38:30.238036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.238046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.238364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.238374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.238762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.238772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.239088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.239098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 
00:38:31.998 [2024-10-01 17:38:30.239407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.239417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.239693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.239703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.240005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.240016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.998 [2024-10-01 17:38:30.240345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.240356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:31.998 [2024-10-01 17:38:30.240576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.240587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.998 [2024-10-01 17:38:30.240906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.240917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.241015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.241026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.998 [2024-10-01 17:38:30.241344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.241354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 
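The subsystem step traced above creates nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and serial number SPDK00000000000001 (-s). A sketch of that call plus the namespace and listener steps such a target setup typically performs next; the last two commands are assumptions about the rest of the script and are not visible in this excerpt:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Typical follow-up (assumed, not shown in this log excerpt):
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420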
00:38:31.998 [2024-10-01 17:38:30.241685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.241695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.241900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.241911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.242218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.242228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.242497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.242506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.242843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.242853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.243166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.243176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.243346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.243356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.243687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.243697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.243990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.244003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.244234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.244244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 
00:38:31.998 [2024-10-01 17:38:30.244548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.244557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.244898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.244908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.245219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.245229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.245558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.245568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.245731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.245741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.245953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.245963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.246288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.246298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.246490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.246501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.246831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.246840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.247158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.247168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 
00:38:31.998 [2024-10-01 17:38:30.247468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.247478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.247787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.247797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.248056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.248067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.248448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.248457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.248768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.998 [2024-10-01 17:38:30.248777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.998 qpair failed and we were unable to recover it. 00:38:31.998 [2024-10-01 17:38:30.249107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.249117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.249277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.249287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.249558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.249568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.249731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.249741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.250094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.250104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 
00:38:31.999 [2024-10-01 17:38:30.250452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.250462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.250818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.250829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.251125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.251134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.251308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.251318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.251661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.251672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.999 [2024-10-01 17:38:30.251966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.251991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.252264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.252275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:31.999 [2024-10-01 17:38:30.252582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.252593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.999 [2024-10-01 17:38:30.252938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.252948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 
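The rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 trace above attaches the Malloc0 bdev to the subsystem as a namespace; the connect retries keep failing in the meantime because no TCP listener has been opened yet. A rough out-of-harness equivalent, assuming the same rpc.py client and that the Malloc0 bdev was already created earlier in the run (outside this excerpt):

    # Sketch: attach the existing Malloc0 bdev as a namespace of cnode1,
    # mirroring the rpc_cmd trace above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0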
00:38:31.999 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:31.999 [2024-10-01 17:38:30.253258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.253269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.253604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.253614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.253786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.253797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.254156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.254166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.254461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.254471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.254835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.254845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.255048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.255065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.255272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.255282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.255774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.255868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.256401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.256494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f956c000b90 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 
00:38:31.999 [2024-10-01 17:38:30.256830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.256842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.257155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.257165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.257453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.257463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.257797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.257807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.258107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.258116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.258464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.258474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.258780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.258790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.259100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.259110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.259326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.259342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.259653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.259663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 
00:38:31.999 [2024-10-01 17:38:30.259954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.259963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.260278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.260287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.260598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.999 [2024-10-01 17:38:30.260608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:31.999 qpair failed and we were unable to recover it. 00:38:31.999 [2024-10-01 17:38:30.260942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.260953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.261232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.261242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.261421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.261431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.261743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.261753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.262069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.262080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.262392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.262402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.262743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.262753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 
00:38:32.000 [2024-10-01 17:38:30.262961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.262971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.263194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.263204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.263574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.263585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.263916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.000 [2024-10-01 17:38:30.263927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.264241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.264251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.000 [2024-10-01 17:38:30.264532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.264543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.000 [2024-10-01 17:38:30.264852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.264862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:32.000 [2024-10-01 17:38:30.265167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.265177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 
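The rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 trace above is the step that opens the TCP listener the host has been probing; once it takes effect, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice a few lines further down. A rough equivalent outside the harness, again assuming the stock rpc.py client:

    # Sketch: open the TCP listener for cnode1 on the address/port the host
    # has been probing, as in the rpc_cmd trace above.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420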
00:38:32.000 [2024-10-01 17:38:30.265397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.265407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.265737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.265748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.266082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.266092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.266387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.266397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.266599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.266609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.266788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.266798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.267109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.267119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.267456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.267466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.267754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.267765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.267956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.267967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 
00:38:32.000 [2024-10-01 17:38:30.268135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.268146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.268480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.268490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.268694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.268704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.268872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.268883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.269185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.269194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.269490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.269500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.269810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.269822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.270157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.270167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.270356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.270366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.270749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.270759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 
00:38:32.000 [2024-10-01 17:38:30.271037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.000 [2024-10-01 17:38:30.271047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ff1f0 with addr=10.0.0.2, port=4420 00:38:32.000 qpair failed and we were unable to recover it. 00:38:32.000 [2024-10-01 17:38:30.271126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.000 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:32.001 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.001 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:32.001 [2024-10-01 17:38:30.281797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.281880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.281898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.281906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.281913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.281931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.001 17:38:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3297689 00:38:32.001 [2024-10-01 17:38:30.291696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.291759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.291773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.291781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.291787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.291802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 
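After the listener notice above, the discovery listener is added as well (nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420) and the failure mode changes: the TCP connection now succeeds, but the Fabrics CONNECT for I/O queue pair 3 is rejected because the target no longer recognizes controller ID 0x1 (_nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1). The host reports this as "sct 1, sc 130", which corresponds to a command-specific Connect status of 0x82 (Connect Invalid Parameters), and then as "CQ transport error -6 (No such device or address)", consistent with what the nvmf_target_disconnect_tc2 case is designed to provoke. For orientation only, a host-side connect against the same listener would look roughly like the line below; it assumes nvme-cli on a Linux host, whereas this test drives the connection through SPDK's own initiator.

    # Hypothetical kernel-host connect to the listener above (not used by this test).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1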
00:38:32.001 [2024-10-01 17:38:30.301720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.301805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.301819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.301827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.301833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.301846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.311759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.311818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.311832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.311839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.311846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.311859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.321608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.321664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.321680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.321687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.321693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.321707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 
00:38:32.001 [2024-10-01 17:38:30.331725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.331828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.331843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.331850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.331857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.331870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.341736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.341819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.341833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.341840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.341847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.341860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.351791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.351846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.351860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.351867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.351874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.351887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 
00:38:32.001 [2024-10-01 17:38:30.361841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.361919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.361932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.361943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.361950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.361963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.371859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.371967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.371981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.371989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.371999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.372013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.381883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.381965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.381979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.381986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.381997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.382011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 
00:38:32.001 [2024-10-01 17:38:30.391897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.391951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.001 [2024-10-01 17:38:30.391965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.001 [2024-10-01 17:38:30.391972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.001 [2024-10-01 17:38:30.391979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.001 [2024-10-01 17:38:30.391992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.001 qpair failed and we were unable to recover it. 00:38:32.001 [2024-10-01 17:38:30.401934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.001 [2024-10-01 17:38:30.401992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.402010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.402017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.402023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.402037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.411950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.412040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.412055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.412062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.412069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.412082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 
00:38:32.002 [2024-10-01 17:38:30.421972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.422024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.422038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.422045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.422052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.422065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.431966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.432028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.432042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.432049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.432056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.432069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.442038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.442093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.442107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.442114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.442121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.442134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 
00:38:32.002 [2024-10-01 17:38:30.452049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.452104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.452121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.452128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.452135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.452149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.462019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.462103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.462118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.462126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.462133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.462147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.472139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.472201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.472215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.472226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.472235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.472249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 
00:38:32.002 [2024-10-01 17:38:30.482149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.482204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.482218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.482225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.482232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.482245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.492175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.492232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.492246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.492254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.492261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.492274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.502270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.502350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.502364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.502371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.502377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.502390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 
00:38:32.002 [2024-10-01 17:38:30.512292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.512355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.512368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.002 [2024-10-01 17:38:30.512377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.002 [2024-10-01 17:38:30.512384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.002 [2024-10-01 17:38:30.512397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.002 qpair failed and we were unable to recover it. 00:38:32.002 [2024-10-01 17:38:30.522283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.002 [2024-10-01 17:38:30.522338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.002 [2024-10-01 17:38:30.522352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.003 [2024-10-01 17:38:30.522359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.003 [2024-10-01 17:38:30.522366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.003 [2024-10-01 17:38:30.522379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.003 qpair failed and we were unable to recover it. 00:38:32.003 [2024-10-01 17:38:30.532209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.003 [2024-10-01 17:38:30.532258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.003 [2024-10-01 17:38:30.532273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.003 [2024-10-01 17:38:30.532280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.003 [2024-10-01 17:38:30.532287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.003 [2024-10-01 17:38:30.532300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.003 qpair failed and we were unable to recover it. 
00:38:32.264 [2024-10-01 17:38:30.542326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.264 [2024-10-01 17:38:30.542379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.264 [2024-10-01 17:38:30.542396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.264 [2024-10-01 17:38:30.542403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.264 [2024-10-01 17:38:30.542410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.264 [2024-10-01 17:38:30.542423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.264 [2024-10-01 17:38:30.552348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.264 [2024-10-01 17:38:30.552406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.264 [2024-10-01 17:38:30.552419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.264 [2024-10-01 17:38:30.552427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.264 [2024-10-01 17:38:30.552434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.264 [2024-10-01 17:38:30.552447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.264 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.562373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.562427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.562441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.562448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.562454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.562468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 
00:38:32.265 [2024-10-01 17:38:30.572406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.572460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.572473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.572481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.572487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.572501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.582411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.582473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.582489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.582496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.582503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.582517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.592453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.592510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.592523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.592530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.592537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.592551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 
00:38:32.265 [2024-10-01 17:38:30.602485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.602538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.602551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.602559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.602565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.602579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.612507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.612564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.612577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.612584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.612591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.612604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.622488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.622545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.622559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.622566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.622573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.622586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 
00:38:32.265 [2024-10-01 17:38:30.632575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.632631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.632648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.632655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.632662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.632675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.642590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.642680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.642694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.642701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.642708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.642722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.652508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.652570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.652584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.652591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.652598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.652611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 
00:38:32.265 [2024-10-01 17:38:30.662624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.662678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.662691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.662698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.662705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.662718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.672694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.672752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.672765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.672773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.672779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.672796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 00:38:32.265 [2024-10-01 17:38:30.682711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.682775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.682800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.265 [2024-10-01 17:38:30.682809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.265 [2024-10-01 17:38:30.682817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.265 [2024-10-01 17:38:30.682836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.265 qpair failed and we were unable to recover it. 
00:38:32.265 [2024-10-01 17:38:30.692739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.265 [2024-10-01 17:38:30.692796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.265 [2024-10-01 17:38:30.692822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.692831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.692838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.692857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.702727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.702782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.702799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.702806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.702813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.702827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.712787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.712872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.712886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.712893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.712901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.712914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 
00:38:32.266 [2024-10-01 17:38:30.722839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.722901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.722919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.722926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.722933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.722947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.732829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.732878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.732892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.732899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.732906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.732919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.742866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.742917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.742933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.742940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.742946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.742960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 
00:38:32.266 [2024-10-01 17:38:30.752880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.752941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.752955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.752962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.752968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.752982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.762924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.762980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.762998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.763006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.763012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.763030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.772952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.773007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.773021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.773028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.773034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.773048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 
00:38:32.266 [2024-10-01 17:38:30.782902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.782956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.782969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.782977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.782983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.783001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.792986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.793048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.793061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.793068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.793075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.793088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 00:38:32.266 [2024-10-01 17:38:30.803050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.266 [2024-10-01 17:38:30.803103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.266 [2024-10-01 17:38:30.803117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.266 [2024-10-01 17:38:30.803124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.266 [2024-10-01 17:38:30.803131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.266 [2024-10-01 17:38:30.803144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.266 qpair failed and we were unable to recover it. 
00:38:32.528 [2024-10-01 17:38:30.813077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.528 [2024-10-01 17:38:30.813146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.528 [2024-10-01 17:38:30.813163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.528 [2024-10-01 17:38:30.813170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.528 [2024-10-01 17:38:30.813177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.528 [2024-10-01 17:38:30.813190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.528 qpair failed and we were unable to recover it. 00:38:32.528 [2024-10-01 17:38:30.823144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.528 [2024-10-01 17:38:30.823208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.528 [2024-10-01 17:38:30.823222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.528 [2024-10-01 17:38:30.823229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.528 [2024-10-01 17:38:30.823236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.528 [2024-10-01 17:38:30.823249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.528 qpair failed and we were unable to recover it. 00:38:32.528 [2024-10-01 17:38:30.833145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.528 [2024-10-01 17:38:30.833231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.528 [2024-10-01 17:38:30.833244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.528 [2024-10-01 17:38:30.833253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.528 [2024-10-01 17:38:30.833260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.528 [2024-10-01 17:38:30.833273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.528 qpair failed and we were unable to recover it. 
00:38:32.528 [2024-10-01 17:38:30.843049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.528 [2024-10-01 17:38:30.843103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.528 [2024-10-01 17:38:30.843117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.528 [2024-10-01 17:38:30.843125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.528 [2024-10-01 17:38:30.843132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.528 [2024-10-01 17:38:30.843144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.528 qpair failed and we were unable to recover it. 00:38:32.528 [2024-10-01 17:38:30.853108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.528 [2024-10-01 17:38:30.853168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.528 [2024-10-01 17:38:30.853181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.528 [2024-10-01 17:38:30.853188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.528 [2024-10-01 17:38:30.853195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.853212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.863228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.863280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.863294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.863301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.863307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.863320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 
00:38:32.529 [2024-10-01 17:38:30.873279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.873333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.873347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.873354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.873361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.873374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.883270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.883323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.883337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.883344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.883351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.883364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.893292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.893348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.893362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.893369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.893375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.893389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 
00:38:32.529 [2024-10-01 17:38:30.903305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.903355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.903372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.903379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.903386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.903400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.913341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.913437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.913452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.913459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.913465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.913479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.923326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.923379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.923392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.923399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.923407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.923420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 
00:38:32.529 [2024-10-01 17:38:30.933399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.933449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.933463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.933470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.933477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.933489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.943435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.943492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.943506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.943514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.529 [2024-10-01 17:38:30.943521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.529 [2024-10-01 17:38:30.943538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-10-01 17:38:30.953497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.529 [2024-10-01 17:38:30.953552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.529 [2024-10-01 17:38:30.953565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.529 [2024-10-01 17:38:30.953573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:30.953580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:30.953593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 
00:38:32.530 [2024-10-01 17:38:30.963392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:30.963473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:30.963486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:30.963494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:30.963501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:30.963514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:30.973478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:30.973530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:30.973543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:30.973551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:30.973557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:30.973571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:30.983533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:30.983584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:30.983599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:30.983606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:30.983613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:30.983627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 
00:38:32.530 [2024-10-01 17:38:30.993591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:30.993646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:30.993663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:30.993670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:30.993677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:30.993690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:31.003621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.003678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.003692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.003699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.003705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.003719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:31.013633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.013689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.013703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.013711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.013717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.013730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 
00:38:32.530 [2024-10-01 17:38:31.023681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.023731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.023745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.023753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.023759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.023772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:31.033706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.033778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.033792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.033799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.033805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.033823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:31.043763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.043817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.043831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.043838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.043845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.043858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 
00:38:32.530 [2024-10-01 17:38:31.053738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.053815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.053829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.053836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.053844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.053857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.530 [2024-10-01 17:38:31.063770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.530 [2024-10-01 17:38:31.063819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.530 [2024-10-01 17:38:31.063833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.530 [2024-10-01 17:38:31.063840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.530 [2024-10-01 17:38:31.063847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.530 [2024-10-01 17:38:31.063860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.530 qpair failed and we were unable to recover it. 00:38:32.792 [2024-10-01 17:38:31.073819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.073888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.073903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.073910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.073917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.073930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.792 qpair failed and we were unable to recover it. 
00:38:32.792 [2024-10-01 17:38:31.083835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.083928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.083950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.083957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.083963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.083977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.792 qpair failed and we were unable to recover it. 00:38:32.792 [2024-10-01 17:38:31.093857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.093914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.093929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.093937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.093944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.093961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.792 qpair failed and we were unable to recover it. 00:38:32.792 [2024-10-01 17:38:31.103883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.103940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.103955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.103962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.103969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.103983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.792 qpair failed and we were unable to recover it. 
00:38:32.792 [2024-10-01 17:38:31.113934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.114003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.114017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.114024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.114031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.114044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.792 qpair failed and we were unable to recover it. 00:38:32.792 [2024-10-01 17:38:31.123926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.792 [2024-10-01 17:38:31.123988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.792 [2024-10-01 17:38:31.124006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.792 [2024-10-01 17:38:31.124014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.792 [2024-10-01 17:38:31.124024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.792 [2024-10-01 17:38:31.124038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.133968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.134023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.134036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.134044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.134050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.134064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 
00:38:32.793 [2024-10-01 17:38:31.143950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.144009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.144024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.144032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.144041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.144055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.154022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.154079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.154094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.154101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.154108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.154123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.164065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.164123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.164137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.164144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.164151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.164164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 
00:38:32.793 [2024-10-01 17:38:31.174069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.174123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.174140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.174147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.174154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.174168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.184012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.184064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.184078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.184085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.184092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.184106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.194036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.194094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.194107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.194114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.194121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.194134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 
00:38:32.793 [2024-10-01 17:38:31.204183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.204240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.204253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.204260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.204267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.204280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.214168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.214219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.214233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.214240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.214250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.214263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.224247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.224336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.224350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.224358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.224365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.224379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 
00:38:32.793 [2024-10-01 17:38:31.234263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.234320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.234333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.234340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.234347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.234360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.244289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.244340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.244353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.244361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.793 [2024-10-01 17:38:31.244367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.793 [2024-10-01 17:38:31.244380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.793 qpair failed and we were unable to recover it. 00:38:32.793 [2024-10-01 17:38:31.254305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.793 [2024-10-01 17:38:31.254365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.793 [2024-10-01 17:38:31.254379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.793 [2024-10-01 17:38:31.254386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.254393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.254406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 
00:38:32.794 [2024-10-01 17:38:31.264320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.264371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.264385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.264392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.264399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.264412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:32.794 [2024-10-01 17:38:31.274345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.274399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.274413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.274420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.274426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.274440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:32.794 [2024-10-01 17:38:31.284332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.284391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.284406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.284413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.284420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.284434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 
00:38:32.794 [2024-10-01 17:38:31.294406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.294460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.294473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.294480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.294487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.294501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:32.794 [2024-10-01 17:38:31.304426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.304478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.304491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.304498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.304508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.304522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:32.794 [2024-10-01 17:38:31.314480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.314540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.314553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.314561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.314567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.314580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 
00:38:32.794 [2024-10-01 17:38:31.324480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.324561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.324575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.324582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.324590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.324603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:32.794 [2024-10-01 17:38:31.334496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.794 [2024-10-01 17:38:31.334555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.794 [2024-10-01 17:38:31.334569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.794 [2024-10-01 17:38:31.334576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.794 [2024-10-01 17:38:31.334583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:32.794 [2024-10-01 17:38:31.334596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.794 qpair failed and we were unable to recover it. 00:38:33.057 [2024-10-01 17:38:31.344515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.344570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.344584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.344591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.344598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.344611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 
00:38:33.057 [2024-10-01 17:38:31.354589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.354652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.354666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.354673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.354680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.354693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 00:38:33.057 [2024-10-01 17:38:31.364590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.364641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.364656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.364663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.364670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.364684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 00:38:33.057 [2024-10-01 17:38:31.374614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.374675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.374688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.374696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.374702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.374715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 
00:38:33.057 [2024-10-01 17:38:31.384668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.384731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.384745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.384753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.384759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.384772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 00:38:33.057 [2024-10-01 17:38:31.394698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.394755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.394769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.394776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.394786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.394799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 00:38:33.057 [2024-10-01 17:38:31.404731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.404793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.057 [2024-10-01 17:38:31.404819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.057 [2024-10-01 17:38:31.404828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.057 [2024-10-01 17:38:31.404836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.057 [2024-10-01 17:38:31.404854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.057 qpair failed and we were unable to recover it. 
00:38:33.057 [2024-10-01 17:38:31.414762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.057 [2024-10-01 17:38:31.414815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.414831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.414838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.414845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.414859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.424667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.424728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.424743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.424750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.424757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.424770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.434789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.434849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.434863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.434870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.434877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.434891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 
00:38:33.058 [2024-10-01 17:38:31.444851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.444910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.444924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.444932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.444938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.444952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.454869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.454923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.454937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.454944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.454951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.454964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.464907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.465012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.465027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.465034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.465042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.465056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 
00:38:33.058 [2024-10-01 17:38:31.474937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.475011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.475026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.475033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.475040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.475054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.484939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.485012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.485026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.485033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.485044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.485059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.494978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.495037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.495051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.495058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.495064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.495078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 
00:38:33.058 [2024-10-01 17:38:31.504892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.504948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.504961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.504969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.504975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.504988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.515057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.515113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.515127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.515134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.515141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.515154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.525097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.525180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.525194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.525202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.525209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.525223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 
00:38:33.058 [2024-10-01 17:38:31.535083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.535159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.535173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.535181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.535187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.535201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.058 [2024-10-01 17:38:31.545096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.058 [2024-10-01 17:38:31.545165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.058 [2024-10-01 17:38:31.545179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.058 [2024-10-01 17:38:31.545186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.058 [2024-10-01 17:38:31.545193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.058 [2024-10-01 17:38:31.545206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.058 qpair failed and we were unable to recover it. 00:38:33.059 [2024-10-01 17:38:31.555137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.059 [2024-10-01 17:38:31.555198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.059 [2024-10-01 17:38:31.555211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.059 [2024-10-01 17:38:31.555219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.059 [2024-10-01 17:38:31.555225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.059 [2024-10-01 17:38:31.555238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.059 qpair failed and we were unable to recover it. 
00:38:33.059 [2024-10-01 17:38:31.565183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.059 [2024-10-01 17:38:31.565285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.059 [2024-10-01 17:38:31.565299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.059 [2024-10-01 17:38:31.565306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.059 [2024-10-01 17:38:31.565313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.059 [2024-10-01 17:38:31.565326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.059 qpair failed and we were unable to recover it. 00:38:33.059 [2024-10-01 17:38:31.575209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.059 [2024-10-01 17:38:31.575265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.059 [2024-10-01 17:38:31.575281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.059 [2024-10-01 17:38:31.575288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.059 [2024-10-01 17:38:31.575299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.059 [2024-10-01 17:38:31.575313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.059 qpair failed and we were unable to recover it. 00:38:33.059 [2024-10-01 17:38:31.585134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.059 [2024-10-01 17:38:31.585194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.059 [2024-10-01 17:38:31.585210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.059 [2024-10-01 17:38:31.585217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.059 [2024-10-01 17:38:31.585224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.059 [2024-10-01 17:38:31.585238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.059 qpair failed and we were unable to recover it. 
00:38:33.059 [2024-10-01 17:38:31.595288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.059 [2024-10-01 17:38:31.595346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.059 [2024-10-01 17:38:31.595361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.059 [2024-10-01 17:38:31.595368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.059 [2024-10-01 17:38:31.595375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.059 [2024-10-01 17:38:31.595388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.059 qpair failed and we were unable to recover it. 00:38:33.321 [2024-10-01 17:38:31.605332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.605407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.605421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.321 [2024-10-01 17:38:31.605428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.321 [2024-10-01 17:38:31.605435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.321 [2024-10-01 17:38:31.605449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.321 qpair failed and we were unable to recover it. 00:38:33.321 [2024-10-01 17:38:31.615375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.615441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.615455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.321 [2024-10-01 17:38:31.615462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.321 [2024-10-01 17:38:31.615469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.321 [2024-10-01 17:38:31.615482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.321 qpair failed and we were unable to recover it. 
00:38:33.321 [2024-10-01 17:38:31.625361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.625412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.625425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.321 [2024-10-01 17:38:31.625433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.321 [2024-10-01 17:38:31.625440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.321 [2024-10-01 17:38:31.625453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.321 qpair failed and we were unable to recover it. 00:38:33.321 [2024-10-01 17:38:31.635397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.635477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.635490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.321 [2024-10-01 17:38:31.635497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.321 [2024-10-01 17:38:31.635504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.321 [2024-10-01 17:38:31.635517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.321 qpair failed and we were unable to recover it. 00:38:33.321 [2024-10-01 17:38:31.645369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.645461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.645476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.321 [2024-10-01 17:38:31.645483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.321 [2024-10-01 17:38:31.645490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.321 [2024-10-01 17:38:31.645504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.321 qpair failed and we were unable to recover it. 
00:38:33.321 [2024-10-01 17:38:31.655300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.321 [2024-10-01 17:38:31.655353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.321 [2024-10-01 17:38:31.655367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.655374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.655381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.655394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.665456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.665518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.665532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.665542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.665549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.665563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.675441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.675500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.675514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.675521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.675528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.675541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 
00:38:33.322 [2024-10-01 17:38:31.685521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.685584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.685598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.685605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.685612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.685625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.695506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.695575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.695588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.695596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.695602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.695616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.705560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.705610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.705623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.705631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.705638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.705651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 
00:38:33.322 [2024-10-01 17:38:31.715589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.715663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.715677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.715685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.715691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.715706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.725645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.725695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.725710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.725717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.725724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.725737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.735649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.735706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.735720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.735727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.735734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.735748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 
00:38:33.322 [2024-10-01 17:38:31.745687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.745741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.745754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.745762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.745768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.745782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.755714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.755801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.755815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.755826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.755833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.755847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.765739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.765823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.765838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.765845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.765852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.765865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 
00:38:33.322 [2024-10-01 17:38:31.775704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.775761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.775776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.775784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.775792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.775805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.785807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.322 [2024-10-01 17:38:31.785899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.322 [2024-10-01 17:38:31.785913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.322 [2024-10-01 17:38:31.785921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.322 [2024-10-01 17:38:31.785927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.322 [2024-10-01 17:38:31.785940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.322 qpair failed and we were unable to recover it. 00:38:33.322 [2024-10-01 17:38:31.795801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.795855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.795868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.795875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.795882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.795895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 
00:38:33.323 [2024-10-01 17:38:31.805860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.805917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.805931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.805938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.805944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.805958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 00:38:33.323 [2024-10-01 17:38:31.815865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.815915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.815928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.815935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.815942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.815955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 00:38:33.323 [2024-10-01 17:38:31.825891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.825944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.825958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.825965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.825971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.825984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 
00:38:33.323 [2024-10-01 17:38:31.835928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.835986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.836005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.836012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.836019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.836032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 00:38:33.323 [2024-10-01 17:38:31.845946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.846004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.846018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.846029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.846036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.846049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 00:38:33.323 [2024-10-01 17:38:31.855983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.856041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.856055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.856062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.856069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.856082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 
00:38:33.323 [2024-10-01 17:38:31.865984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.323 [2024-10-01 17:38:31.866045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.323 [2024-10-01 17:38:31.866059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.323 [2024-10-01 17:38:31.866066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.323 [2024-10-01 17:38:31.866072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.323 [2024-10-01 17:38:31.866086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.323 qpair failed and we were unable to recover it. 00:38:33.585 [2024-10-01 17:38:31.876054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.876117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.876131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.876138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.876144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.876158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 00:38:33.585 [2024-10-01 17:38:31.886125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.886175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.886189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.886196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.886203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.886216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 
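Every record in this stretch ends the same way: the target rejects the I/O queue pair for controller ID 0x1 and the host's fabric CONNECT poll completes with sct 1, sc 130. A minimal sketch of how that status pair can be decoded, assuming the CONNECT command-specific status values from the NVMe over Fabrics specification (the helper name and the table below are illustrative, not part of SPDK or this test output):

# Sketch: decode the "sct 1, sc 130" status reported by nvme_fabric_qpair_connect_poll.
# The names below follow the NVMe-oF CONNECT command-specific status values (SCT 0x1);
# treat the table as an assumption to verify against the spec revision in use.
CONNECT_COMMAND_SPECIFIC = {
    0x80: "Connect Incompatible Format",
    0x81: "Connect Controller Busy",
    0x82: "Connect Invalid Parameters",   # 130 decimal, as seen in these records
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Return a human-readable name for a CONNECT completion status."""
    if sct == 0x1:
        return CONNECT_COMMAND_SPECIFIC.get(sc, f"command-specific status 0x{sc:02x}")
    return f"sct 0x{sct:x}, sc 0x{sc:02x}"

print(decode_connect_status(1, 130))   # -> Connect Invalid Parameters

Read this way, sc 130 (0x82) is a "Connect Invalid Parameters" completion, which is consistent with the target-side "Unknown controller ID 0x1" rejection logged for the same queue pair.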
00:38:33.585 [2024-10-01 17:38:31.896056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.896111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.896125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.896132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.896139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.896152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 00:38:33.585 [2024-10-01 17:38:31.906001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.906060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.906074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.906081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.906088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.906101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 00:38:33.585 [2024-10-01 17:38:31.916172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.916225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.916239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.916247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.916253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.916267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 
00:38:33.585 [2024-10-01 17:38:31.926185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.926243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.585 [2024-10-01 17:38:31.926257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.585 [2024-10-01 17:38:31.926264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.585 [2024-10-01 17:38:31.926270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.585 [2024-10-01 17:38:31.926283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.585 qpair failed and we were unable to recover it. 00:38:33.585 [2024-10-01 17:38:31.936167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.585 [2024-10-01 17:38:31.936230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.936244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.936258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.936264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.936279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:31.946228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.946278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.946292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.946299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.946306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.946319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 
00:38:33.586 [2024-10-01 17:38:31.956243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.956300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.956313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.956321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.956327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.956341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:31.966310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.966369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.966383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.966390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.966396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.966410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:31.976194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.976246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.976260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.976267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.976274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.976287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 
00:38:33.586 [2024-10-01 17:38:31.986331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.986388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.986404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.986411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.986417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.986431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:31.996354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:31.996435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:31.996448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:31.996456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:31.996463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:31.996476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:32.006384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.006449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.006464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.006472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.006478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.006492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 
00:38:33.586 [2024-10-01 17:38:32.016408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.016461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.016475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.016482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.016489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.016502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:32.026400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.026457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.026471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.026482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.026488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.026502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:32.036493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.036545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.036560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.036567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.036573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.036587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 
00:38:33.586 [2024-10-01 17:38:32.046505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.046595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.046608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.046617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.046623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.046636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:32.056540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.056589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.056602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.056609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.056616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.056629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 00:38:33.586 [2024-10-01 17:38:32.066558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.586 [2024-10-01 17:38:32.066658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.586 [2024-10-01 17:38:32.066672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.586 [2024-10-01 17:38:32.066680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.586 [2024-10-01 17:38:32.066687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.586 [2024-10-01 17:38:32.066700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.586 qpair failed and we were unable to recover it. 
00:38:33.587 [2024-10-01 17:38:32.076521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.076616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.076631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.076638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.076645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.076658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 00:38:33.587 [2024-10-01 17:38:32.086632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.086682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.086697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.086704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.086710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.086723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 00:38:33.587 [2024-10-01 17:38:32.096617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.096680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.096706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.096715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.096722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.096740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 
00:38:33.587 [2024-10-01 17:38:32.106657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.106719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.106745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.106754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.106761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.106779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 00:38:33.587 [2024-10-01 17:38:32.116726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.116778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.116798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.116806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.116813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.116828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 00:38:33.587 [2024-10-01 17:38:32.126747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.587 [2024-10-01 17:38:32.126804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.587 [2024-10-01 17:38:32.126818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.587 [2024-10-01 17:38:32.126826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.587 [2024-10-01 17:38:32.126832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.587 [2024-10-01 17:38:32.126846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.587 qpair failed and we were unable to recover it. 
00:38:33.848 [2024-10-01 17:38:32.136651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.848 [2024-10-01 17:38:32.136712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.848 [2024-10-01 17:38:32.136726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.848 [2024-10-01 17:38:32.136734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.848 [2024-10-01 17:38:32.136741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.848 [2024-10-01 17:38:32.136754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.848 qpair failed and we were unable to recover it. 00:38:33.848 [2024-10-01 17:38:32.146788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.848 [2024-10-01 17:38:32.146872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.848 [2024-10-01 17:38:32.146886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.848 [2024-10-01 17:38:32.146893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.848 [2024-10-01 17:38:32.146900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.848 [2024-10-01 17:38:32.146913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.848 qpair failed and we were unable to recover it. 00:38:33.848 [2024-10-01 17:38:32.156813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.848 [2024-10-01 17:38:32.156866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.848 [2024-10-01 17:38:32.156879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.848 [2024-10-01 17:38:32.156886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.156893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.156906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 
00:38:33.849 [2024-10-01 17:38:32.166858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.166913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.166927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.166934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.166940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.166954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.176861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.176912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.176925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.176933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.176939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.176952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.186780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.186841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.186855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.186862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.186869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.186882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 
00:38:33.849 [2024-10-01 17:38:32.196889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.196947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.196962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.196969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.196976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.196989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.206939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.207010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.207027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.207035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.207041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.207056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.216972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.217031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.217045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.217052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.217059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.217072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 
00:38:33.849 [2024-10-01 17:38:32.226960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.227017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.227031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.227039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.227045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.227059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.236911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.236971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.236985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.236992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.237004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.237017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.247067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.247118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.247132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.247139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.247146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.247160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 
00:38:33.849 [2024-10-01 17:38:32.257107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.257160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.257174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.257182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.257188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.257202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.267103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.267154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.267168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.267175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.267182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.267196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.277157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.277215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.277230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.277237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.277244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.277258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 
00:38:33.849 [2024-10-01 17:38:32.287115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.849 [2024-10-01 17:38:32.287203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.849 [2024-10-01 17:38:32.287217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.849 [2024-10-01 17:38:32.287225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.849 [2024-10-01 17:38:32.287232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.849 [2024-10-01 17:38:32.287246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.849 qpair failed and we were unable to recover it. 00:38:33.849 [2024-10-01 17:38:32.297198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.297259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.297276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.297283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.297290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.297303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.307232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.307281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.307295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.307302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.307308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.307321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 
00:38:33.850 [2024-10-01 17:38:32.317256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.317312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.317325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.317333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.317339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.317353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.327323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.327376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.327389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.327396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.327403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.327416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.337323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.337379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.337393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.337400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.337407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.337423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 
00:38:33.850 [2024-10-01 17:38:32.347353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.347454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.347469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.347476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.347483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.347496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.357392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.357463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.357476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.357483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.357490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.357503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.367409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.367492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.367506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.367514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.367522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.367535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 
00:38:33.850 [2024-10-01 17:38:32.377429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.377483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.377497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.377505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.377511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.377525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:33.850 [2024-10-01 17:38:32.387448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.850 [2024-10-01 17:38:32.387502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.850 [2024-10-01 17:38:32.387520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.850 [2024-10-01 17:38:32.387527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.850 [2024-10-01 17:38:32.387534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:33.850 [2024-10-01 17:38:32.387547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.850 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.397508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.397567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.397581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.397588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.397594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.397608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 
00:38:34.112 [2024-10-01 17:38:32.407534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.407590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.407604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.407612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.407618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.407633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.417545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.417598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.417614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.417621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.417628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.417645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.427585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.427639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.427653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.427660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.427667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.427685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 
00:38:34.112 [2024-10-01 17:38:32.437611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.437664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.437678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.437685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.437692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.437705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.447648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.447703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.447717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.447725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.447731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.447745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.457626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.457677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.457692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.457699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.457705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.457719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 
00:38:34.112 [2024-10-01 17:38:32.467686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.467735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.467748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.467756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.112 [2024-10-01 17:38:32.467762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.112 [2024-10-01 17:38:32.467776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.112 qpair failed and we were unable to recover it. 00:38:34.112 [2024-10-01 17:38:32.477718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.112 [2024-10-01 17:38:32.477808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.112 [2024-10-01 17:38:32.477826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.112 [2024-10-01 17:38:32.477833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.477840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.477853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.487755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.487811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.487824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.487832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.487838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.487852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 
00:38:34.113 [2024-10-01 17:38:32.497750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.497803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.497816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.497823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.497830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.497843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.507759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.507814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.507828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.507835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.507842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.507855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.517935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.518014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.518028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.518035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.518041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.518059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 
00:38:34.113 [2024-10-01 17:38:32.527947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.528015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.528029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.528036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.528043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.528056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.537896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.538001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.538016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.538023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.538030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.538043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.547947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.548007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.548022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.548029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.548036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.548050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 
00:38:34.113 [2024-10-01 17:38:32.557955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.558021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.558037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.558045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.558055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.558071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.567956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.568016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.568035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.568042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.568049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.568063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.578025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.578080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.578097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.578105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.578112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.578127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 
00:38:34.113 [2024-10-01 17:38:32.587989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.588086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.588101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.588108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.588115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.588129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.598054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.598118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.598131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.598139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.598148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.113 [2024-10-01 17:38:32.598162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.113 qpair failed and we were unable to recover it. 00:38:34.113 [2024-10-01 17:38:32.608089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.113 [2024-10-01 17:38:32.608193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.113 [2024-10-01 17:38:32.608209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.113 [2024-10-01 17:38:32.608216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.113 [2024-10-01 17:38:32.608223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.114 [2024-10-01 17:38:32.608244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.114 qpair failed and we were unable to recover it. 
00:38:34.114 [2024-10-01 17:38:32.618121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.114 [2024-10-01 17:38:32.618179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.114 [2024-10-01 17:38:32.618192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.114 [2024-10-01 17:38:32.618200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.114 [2024-10-01 17:38:32.618206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.114 [2024-10-01 17:38:32.618219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.114 qpair failed and we were unable to recover it. 00:38:34.114 [2024-10-01 17:38:32.628145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.114 [2024-10-01 17:38:32.628205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.114 [2024-10-01 17:38:32.628221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.114 [2024-10-01 17:38:32.628228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.114 [2024-10-01 17:38:32.628235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.114 [2024-10-01 17:38:32.628252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.114 qpair failed and we were unable to recover it. 00:38:34.114 [2024-10-01 17:38:32.638121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.114 [2024-10-01 17:38:32.638194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.114 [2024-10-01 17:38:32.638209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.114 [2024-10-01 17:38:32.638216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.114 [2024-10-01 17:38:32.638223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.114 [2024-10-01 17:38:32.638236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.114 qpair failed and we were unable to recover it. 
00:38:34.114 [2024-10-01 17:38:32.648183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.114 [2024-10-01 17:38:32.648236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.114 [2024-10-01 17:38:32.648249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.114 [2024-10-01 17:38:32.648256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.114 [2024-10-01 17:38:32.648263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.114 [2024-10-01 17:38:32.648276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.114 qpair failed and we were unable to recover it. 00:38:34.375 [2024-10-01 17:38:32.658229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.375 [2024-10-01 17:38:32.658284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.375 [2024-10-01 17:38:32.658301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.375 [2024-10-01 17:38:32.658308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.375 [2024-10-01 17:38:32.658315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.375 [2024-10-01 17:38:32.658328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.375 qpair failed and we were unable to recover it. 00:38:34.375 [2024-10-01 17:38:32.668236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.375 [2024-10-01 17:38:32.668287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.375 [2024-10-01 17:38:32.668301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.375 [2024-10-01 17:38:32.668308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.375 [2024-10-01 17:38:32.668315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.375 [2024-10-01 17:38:32.668329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.375 qpair failed and we were unable to recover it. 
00:38:34.375 [2024-10-01 17:38:32.678183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.375 [2024-10-01 17:38:32.678240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.375 [2024-10-01 17:38:32.678254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.375 [2024-10-01 17:38:32.678261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.375 [2024-10-01 17:38:32.678268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.375 [2024-10-01 17:38:32.678281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.688280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.688337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.688351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.688358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.688365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.688378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.698351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.698406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.698419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.698426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.698433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.698451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-10-01 17:38:32.708398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.708452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.708465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.708472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.708479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.708492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.718404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.718457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.718471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.718478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.718485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.718498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.728395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.728445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.728458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.728465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.728471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.728484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-10-01 17:38:32.738456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.738509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.738523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.738530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.738537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.738550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.748449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.748536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.748553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.748560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.748567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.748581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.758586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.758644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.758657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.758665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.758671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.758684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-10-01 17:38:32.768515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.768568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.768582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.768589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.768596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.768609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.778600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.778653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.778667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.778674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.778681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.778694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.788598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.788648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.788661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.788669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.788678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.788692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 
00:38:34.376 [2024-10-01 17:38:32.798637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.798733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.798747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.798754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.798761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.376 [2024-10-01 17:38:32.798774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.376 qpair failed and we were unable to recover it. 00:38:34.376 [2024-10-01 17:38:32.808634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.376 [2024-10-01 17:38:32.808687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.376 [2024-10-01 17:38:32.808713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.376 [2024-10-01 17:38:32.808721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.376 [2024-10-01 17:38:32.808728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.808746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.818678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.818783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.818809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.818818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.818825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.818844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 
00:38:34.377 [2024-10-01 17:38:32.828639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.828697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.828722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.828731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.828738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.828758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.838746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.838811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.838836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.838845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.838852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.838871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.848729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.848794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.848810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.848818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.848825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.848839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 
00:38:34.377 [2024-10-01 17:38:32.858706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.858760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.858774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.858781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.858788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.858802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.868811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.868904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.868918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.868925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.868932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.868945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.878845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.878902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.878916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.878923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.878934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.878948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 
00:38:34.377 [2024-10-01 17:38:32.888818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.888867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.888881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.888888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.888895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.888908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.898891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.898943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.898957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.898964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.898971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.898984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.377 [2024-10-01 17:38:32.908876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.908955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.908969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.908976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.908983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.909000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 
00:38:34.377 [2024-10-01 17:38:32.918971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.377 [2024-10-01 17:38:32.919049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.377 [2024-10-01 17:38:32.919063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.377 [2024-10-01 17:38:32.919071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.377 [2024-10-01 17:38:32.919077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.377 [2024-10-01 17:38:32.919091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.377 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.928966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.929023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.929037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.929044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.929050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.929064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.939018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.939071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.939084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.939092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.939098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.939111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 
00:38:34.639 [2024-10-01 17:38:32.949037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.949092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.949106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.949113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.949120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.949133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.959096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.959192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.959207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.959214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.959221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.959235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.968999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.969049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.969063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.969070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.969080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.969094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 
00:38:34.639 [2024-10-01 17:38:32.979147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.979200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.979214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.979221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.979228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.979242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.989161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.989218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.989232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.989239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.989246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.989260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:32.999213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:32.999271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:32.999284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:32.999292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:32.999298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:32.999311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 
00:38:34.639 [2024-10-01 17:38:33.009227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:33.009279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:33.009292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:33.009300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:33.009306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:33.009319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:33.019228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.639 [2024-10-01 17:38:33.019286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.639 [2024-10-01 17:38:33.019300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.639 [2024-10-01 17:38:33.019307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.639 [2024-10-01 17:38:33.019314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.639 [2024-10-01 17:38:33.019327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.639 qpair failed and we were unable to recover it. 00:38:34.639 [2024-10-01 17:38:33.029269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.029390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.029406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.029413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.029420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.029433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 
00:38:34.640 [2024-10-01 17:38:33.039187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.039246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.039260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.039268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.039276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.039290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.049309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.049411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.049425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.049433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.049439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.049453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.059371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.059425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.059438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.059446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.059456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.059469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 
00:38:34.640 [2024-10-01 17:38:33.069436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.069491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.069505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.069512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.069519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.069532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.079459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.079515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.079529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.079536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.079543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.079556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.089464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.089519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.089533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.089540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.089547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.089560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 
00:38:34.640 [2024-10-01 17:38:33.099343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.099397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.099410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.099418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.099425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.099438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.109509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.109608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.109623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.109630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.109637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.109650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.119542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.119598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.119612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.119619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.119625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.119639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 
00:38:34.640 [2024-10-01 17:38:33.129524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.129572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.129587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.129595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.129603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.129620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.139579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.139633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.139646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.139653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.139660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.139673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 00:38:34.640 [2024-10-01 17:38:33.149655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.149731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.149745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.149752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.640 [2024-10-01 17:38:33.149765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.640 [2024-10-01 17:38:33.149779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.640 qpair failed and we were unable to recover it. 
00:38:34.640 [2024-10-01 17:38:33.159664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.640 [2024-10-01 17:38:33.159723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.640 [2024-10-01 17:38:33.159736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.640 [2024-10-01 17:38:33.159744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.641 [2024-10-01 17:38:33.159750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.641 [2024-10-01 17:38:33.159763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.641 qpair failed and we were unable to recover it. 00:38:34.641 [2024-10-01 17:38:33.169566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.641 [2024-10-01 17:38:33.169620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.641 [2024-10-01 17:38:33.169635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.641 [2024-10-01 17:38:33.169642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.641 [2024-10-01 17:38:33.169649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.641 [2024-10-01 17:38:33.169662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.641 qpair failed and we were unable to recover it. 00:38:34.641 [2024-10-01 17:38:33.179694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.641 [2024-10-01 17:38:33.179756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.641 [2024-10-01 17:38:33.179771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.641 [2024-10-01 17:38:33.179778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.641 [2024-10-01 17:38:33.179786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.641 [2024-10-01 17:38:33.179799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.641 qpair failed and we were unable to recover it. 
00:38:34.902 [2024-10-01 17:38:33.189728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.902 [2024-10-01 17:38:33.189782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.902 [2024-10-01 17:38:33.189808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.902 [2024-10-01 17:38:33.189817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.902 [2024-10-01 17:38:33.189824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.902 [2024-10-01 17:38:33.189843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.902 qpair failed and we were unable to recover it. 00:38:34.902 [2024-10-01 17:38:33.199775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.902 [2024-10-01 17:38:33.199841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.902 [2024-10-01 17:38:33.199867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.902 [2024-10-01 17:38:33.199876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.902 [2024-10-01 17:38:33.199883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.902 [2024-10-01 17:38:33.199902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.902 qpair failed and we were unable to recover it. 00:38:34.902 [2024-10-01 17:38:33.209779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.902 [2024-10-01 17:38:33.209866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.902 [2024-10-01 17:38:33.209882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.902 [2024-10-01 17:38:33.209890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.902 [2024-10-01 17:38:33.209897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.902 [2024-10-01 17:38:33.209913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.902 qpair failed and we were unable to recover it. 
00:38:34.902 [2024-10-01 17:38:33.219799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.902 [2024-10-01 17:38:33.219892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.902 [2024-10-01 17:38:33.219907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.902 [2024-10-01 17:38:33.219915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.902 [2024-10-01 17:38:33.219922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.902 [2024-10-01 17:38:33.219936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.902 qpair failed and we were unable to recover it. 00:38:34.902 [2024-10-01 17:38:33.229849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.902 [2024-10-01 17:38:33.229899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.902 [2024-10-01 17:38:33.229912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.902 [2024-10-01 17:38:33.229920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.902 [2024-10-01 17:38:33.229926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.902 [2024-10-01 17:38:33.229940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.239812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.239891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.239906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.239918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.239925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.239939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 
00:38:34.903 [2024-10-01 17:38:33.249842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.249888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.249902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.249909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.249916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.249930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.259842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.259899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.259914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.259921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.259928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.259943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.269923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.269972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.269986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.269998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.270005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.270019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 
00:38:34.903 [2024-10-01 17:38:33.279920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.279969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.279984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.279991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.280001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.280015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.289965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.290014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.290029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.290036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.290044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.290059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.299887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.299939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.299953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.299960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.299967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.299980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 
00:38:34.903 [2024-10-01 17:38:33.310044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.310096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.310110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.310117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.310124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.310137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.320039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.320092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.320105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.320113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.320120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.320134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.330121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.330200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.330213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.330225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.330233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.330247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 
00:38:34.903 [2024-10-01 17:38:33.340101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.340153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.340166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.340174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.340180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.340194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.350145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.350195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.350208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.350216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.350223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.350237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 00:38:34.903 [2024-10-01 17:38:33.360127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.360173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.360187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.360194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.903 [2024-10-01 17:38:33.360201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.903 [2024-10-01 17:38:33.360214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.903 qpair failed and we were unable to recover it. 
00:38:34.903 [2024-10-01 17:38:33.370163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.903 [2024-10-01 17:38:33.370258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.903 [2024-10-01 17:38:33.370272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.903 [2024-10-01 17:38:33.370279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.370286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.370299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:34.904 [2024-10-01 17:38:33.380200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.380251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.380265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.380273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.380280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.380293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:34.904 [2024-10-01 17:38:33.390250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.390317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.390330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.390338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.390345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.390358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 
00:38:34.904 [2024-10-01 17:38:33.400244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.400294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.400307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.400314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.400321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.400334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:34.904 [2024-10-01 17:38:33.410286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.410371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.410384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.410392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.410399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.410412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:34.904 [2024-10-01 17:38:33.420325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.420374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.420388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.420399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.420406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.420419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 
00:38:34.904 [2024-10-01 17:38:33.430316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.430376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.430390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.430398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.430404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.430418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:34.904 [2024-10-01 17:38:33.440303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:34.904 [2024-10-01 17:38:33.440354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:34.904 [2024-10-01 17:38:33.440369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:34.904 [2024-10-01 17:38:33.440377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:34.904 [2024-10-01 17:38:33.440383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:34.904 [2024-10-01 17:38:33.440397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:34.904 qpair failed and we were unable to recover it. 00:38:35.165 [2024-10-01 17:38:33.450391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.165 [2024-10-01 17:38:33.450443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.165 [2024-10-01 17:38:33.450457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.165 [2024-10-01 17:38:33.450464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.165 [2024-10-01 17:38:33.450471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.165 [2024-10-01 17:38:33.450484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.165 qpair failed and we were unable to recover it. 
00:38:35.165 [2024-10-01 17:38:33.460456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.460540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.460553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.460561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.460568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.460581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.470438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.470531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.470545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.470552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.470559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.470572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.480461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.480522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.480536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.480543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.480550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.480563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 
00:38:35.166 [2024-10-01 17:38:33.490511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.490561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.490574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.490582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.490589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.490602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.500558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.500607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.500621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.500628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.500634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.500648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.510506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.510552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.510566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.510580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.510587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.510600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 
00:38:35.166 [2024-10-01 17:38:33.520605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.520651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.520664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.520672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.520678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.520692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.530518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.530565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.530579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.530586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.530593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.530606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.540595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.540690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.540706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.540715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.540722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.540736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 
00:38:35.166 [2024-10-01 17:38:33.550550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.550605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.550619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.550627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.550633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.550647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.560690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.560739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.560753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.560760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.560767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.560781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.570723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.570807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.570833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.570842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.570850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.570868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 
00:38:35.166 [2024-10-01 17:38:33.580770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.580827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.580845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.580853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.166 [2024-10-01 17:38:33.580860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.166 [2024-10-01 17:38:33.580875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.166 qpair failed and we were unable to recover it. 00:38:35.166 [2024-10-01 17:38:33.590772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.166 [2024-10-01 17:38:33.590815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.166 [2024-10-01 17:38:33.590831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.166 [2024-10-01 17:38:33.590838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.590845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.590859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.600721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.600772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.600787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.600799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.600806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.600821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 
00:38:35.167 [2024-10-01 17:38:33.610831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.610884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.610899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.610906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.610913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.610926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.620894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.620990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.621008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.621016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.621023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.621037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.630875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.630927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.630941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.630948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.630955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.630969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 
00:38:35.167 [2024-10-01 17:38:33.640875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.640921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.640936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.640943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.640950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.640964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.650935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.650985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.651005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.651012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.651019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.651033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.660987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.661037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.661050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.661058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.661064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.661078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 
00:38:35.167 [2024-10-01 17:38:33.670988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.671032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.671046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.671053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.671060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.671075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.681022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.681071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.681085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.681092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.681099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.681112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.691054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.691115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.691129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.691139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.691146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.691160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 
00:38:35.167 [2024-10-01 17:38:33.701067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.701118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.701132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.701139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.701146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.701160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.167 [2024-10-01 17:38:33.710956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.167 [2024-10-01 17:38:33.711001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.167 [2024-10-01 17:38:33.711015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.167 [2024-10-01 17:38:33.711022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.167 [2024-10-01 17:38:33.711029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.167 [2024-10-01 17:38:33.711042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.167 qpair failed and we were unable to recover it. 00:38:35.429 [2024-10-01 17:38:33.721134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.721182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.721196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.721203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.721210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.721223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 
00:38:35.429 [2024-10-01 17:38:33.731162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.731258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.731272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.731279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.731286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.731299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 00:38:35.429 [2024-10-01 17:38:33.741154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.741203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.741216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.741224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.741230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.741244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 00:38:35.429 [2024-10-01 17:38:33.751210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.751256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.751269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.751276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.751283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.751296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 
00:38:35.429 [2024-10-01 17:38:33.761282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.761351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.761365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.761372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.761378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.761391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 00:38:35.429 [2024-10-01 17:38:33.771277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.429 [2024-10-01 17:38:33.771328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.429 [2024-10-01 17:38:33.771342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.429 [2024-10-01 17:38:33.771349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.429 [2024-10-01 17:38:33.771355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.429 [2024-10-01 17:38:33.771368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.429 qpair failed and we were unable to recover it. 00:38:35.429 [2024-10-01 17:38:33.781324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.781376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.781393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.781400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.781406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.781420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 
00:38:35.430 [2024-10-01 17:38:33.791316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.791359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.791374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.791382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.791390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.791404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.801337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.801387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.801400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.801408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.801414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.801427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.811360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.811410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.811423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.811431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.811438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.811451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 
00:38:35.430 [2024-10-01 17:38:33.821417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.821503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.821517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.821525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.821532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.821545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.831400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.831446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.831459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.831466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.831473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.831486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.841454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.841503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.841516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.841524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.841531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.841544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 
00:38:35.430 [2024-10-01 17:38:33.851478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.851530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.851543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.851551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.851557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.851570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.861523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.861571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.861586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.861593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.861600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.861614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.871513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.871556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.871573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.871581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.871587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.871601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 
00:38:35.430 [2024-10-01 17:38:33.881550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.881597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.881611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.881618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.881625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.881638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.891566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.891616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.891630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.891638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.891645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.891658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 00:38:35.430 [2024-10-01 17:38:33.901607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.901652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.901666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.901673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.901680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.901693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.430 qpair failed and we were unable to recover it. 
00:38:35.430 [2024-10-01 17:38:33.911635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.430 [2024-10-01 17:38:33.911707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.430 [2024-10-01 17:38:33.911720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.430 [2024-10-01 17:38:33.911728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.430 [2024-10-01 17:38:33.911734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.430 [2024-10-01 17:38:33.911748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 00:38:35.431 [2024-10-01 17:38:33.921623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.921705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.921731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.921740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.921747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.921766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 00:38:35.431 [2024-10-01 17:38:33.931650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.931701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.931717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.931724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.931731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.931744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 
00:38:35.431 [2024-10-01 17:38:33.941742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.941796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.941822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.941831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.941838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.941856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 00:38:35.431 [2024-10-01 17:38:33.951741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.951788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.951803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.951811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.951817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.951832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 00:38:35.431 [2024-10-01 17:38:33.961778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.961825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.961844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.961851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.961858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.961871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 
00:38:35.431 [2024-10-01 17:38:33.971801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.431 [2024-10-01 17:38:33.971855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.431 [2024-10-01 17:38:33.971870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.431 [2024-10-01 17:38:33.971877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.431 [2024-10-01 17:38:33.971884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.431 [2024-10-01 17:38:33.971897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.431 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:33.981876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:33.981967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:33.981981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:33.981988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:33.982077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:33.982093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:33.991836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:33.991903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:33.991916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:33.991924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:33.991930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:33.991944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 
00:38:35.694 [2024-10-01 17:38:34.001883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.001928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.001942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.001949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.001956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.001974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.011915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.011960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.011974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.011982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.011988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.012005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.022007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.022080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.022094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.022101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.022108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.022121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 
00:38:35.694 [2024-10-01 17:38:34.031953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.032004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.032018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.032025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.032032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.032046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.041984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.042036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.042050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.042057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.042064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.042077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.051897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.051943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.051961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.051968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.051975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.051988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 
00:38:35.694 [2024-10-01 17:38:34.062115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.062166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.062181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.062188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.062195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.062209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.071996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.072041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.072057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.072064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.072071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.072085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.082032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.082081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.082095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.082102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.082109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.082122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 
00:38:35.694 [2024-10-01 17:38:34.092131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.092185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.092201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.092209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.092216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.092236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.102153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.102245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.694 [2024-10-01 17:38:34.102261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.694 [2024-10-01 17:38:34.102268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.694 [2024-10-01 17:38:34.102275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.694 [2024-10-01 17:38:34.102288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.694 qpair failed and we were unable to recover it. 00:38:35.694 [2024-10-01 17:38:34.112185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.694 [2024-10-01 17:38:34.112234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.112248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.112255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.112261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.112274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 
00:38:35.695 [2024-10-01 17:38:34.122135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.122224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.122237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.122244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.122251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.122264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.132241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.132311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.132324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.132331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.132338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.132351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.142289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.142335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.142351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.142358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.142365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.142378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 
00:38:35.695 [2024-10-01 17:38:34.152278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.152324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.152338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.152345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.152352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.152365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.162313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.162357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.162371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.162378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.162385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.162398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.172352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.172401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.172414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.172422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.172428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.172441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 
00:38:35.695 [2024-10-01 17:38:34.182405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.182454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.182469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.182476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.182483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.182506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.192390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.192440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.192454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.192461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.192468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.192481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.202422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.202475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.202489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.202496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.202503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.202517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 
00:38:35.695 [2024-10-01 17:38:34.212451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.212539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.212553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.212560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.212566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.212579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.222515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.222561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.222575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.222582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.222588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.222602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 00:38:35.695 [2024-10-01 17:38:34.232496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.695 [2024-10-01 17:38:34.232549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.695 [2024-10-01 17:38:34.232566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.695 [2024-10-01 17:38:34.232574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.695 [2024-10-01 17:38:34.232580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.695 [2024-10-01 17:38:34.232594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.695 qpair failed and we were unable to recover it. 
00:38:35.957 [2024-10-01 17:38:34.242409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.957 [2024-10-01 17:38:34.242455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.957 [2024-10-01 17:38:34.242469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.957 [2024-10-01 17:38:34.242476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.957 [2024-10-01 17:38:34.242483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.957 [2024-10-01 17:38:34.242496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.957 qpair failed and we were unable to recover it. 00:38:35.957 [2024-10-01 17:38:34.252571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.957 [2024-10-01 17:38:34.252624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.957 [2024-10-01 17:38:34.252637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.957 [2024-10-01 17:38:34.252644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.957 [2024-10-01 17:38:34.252651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.957 [2024-10-01 17:38:34.252664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.957 qpair failed and we were unable to recover it. 00:38:35.957 [2024-10-01 17:38:34.262623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.957 [2024-10-01 17:38:34.262673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.957 [2024-10-01 17:38:34.262687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.262694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.262701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.262714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 
00:38:35.958 [2024-10-01 17:38:34.272612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.272657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.272671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.272679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.272685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.272703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.282648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.282710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.282725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.282732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.282742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.282756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.292687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.292739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.292754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.292762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.292768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.292782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 
00:38:35.958 [2024-10-01 17:38:34.302629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.302676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.302689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.302696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.302703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.302716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.312722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.312777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.312791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.312798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.312805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.312818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.322773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.322828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.322858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.322867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.322874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.322893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 
00:38:35.958 [2024-10-01 17:38:34.332794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.332855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.332870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.332878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.332884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.332898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.342805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.342856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.342870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.342877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.342884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.342897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.352798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.352846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.352860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.352867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.352874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.352888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 
00:38:35.958 [2024-10-01 17:38:34.362856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.362906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.362919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.362927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.362933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.362950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.372897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.372947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.372961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.372968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.372974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.372988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.382974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.383029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.383044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.383051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.383057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.383071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 
00:38:35.958 [2024-10-01 17:38:34.392955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.958 [2024-10-01 17:38:34.393011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.958 [2024-10-01 17:38:34.393025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.958 [2024-10-01 17:38:34.393033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.958 [2024-10-01 17:38:34.393040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.958 [2024-10-01 17:38:34.393053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.958 qpair failed and we were unable to recover it. 00:38:35.958 [2024-10-01 17:38:34.402861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.402912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.402926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.402933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.402940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.402953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.413005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.413059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.413077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.413084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.413091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.413104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 
00:38:35.959 [2024-10-01 17:38:34.423065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.423118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.423132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.423140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.423146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.423160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.433098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.433179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.433193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.433201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.433208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.433221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.443096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.443146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.443160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.443167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.443174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.443187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 
00:38:35.959 [2024-10-01 17:38:34.453031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.453078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.453091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.453098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.453108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.453122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.463147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.463192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.463206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.463213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.463219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.463233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.473167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.473213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.473226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.473233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.473240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.473253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 
00:38:35.959 [2024-10-01 17:38:34.483217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.483268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.483283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.483290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.483297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.483311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:35.959 [2024-10-01 17:38:34.493183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.959 [2024-10-01 17:38:34.493229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.959 [2024-10-01 17:38:34.493244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.959 [2024-10-01 17:38:34.493251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.959 [2024-10-01 17:38:34.493257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:35.959 [2024-10-01 17:38:34.493270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:35.959 qpair failed and we were unable to recover it. 00:38:36.220 [2024-10-01 17:38:34.503213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.220 [2024-10-01 17:38:34.503267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.220 [2024-10-01 17:38:34.503281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.503288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.503296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.503309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 
00:38:36.221 [2024-10-01 17:38:34.513275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.513324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.513337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.513345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.513351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.513364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.523299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.523424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.523436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.523444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.523451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.523464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.533309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.533355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.533369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.533376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.533383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.533396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 
00:38:36.221 [2024-10-01 17:38:34.543380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.543439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.543453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.543462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.543472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.543485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.553369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.553415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.553429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.553436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.553443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.553456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.563386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.563450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.563464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.563471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.563478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.563491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 
00:38:36.221 [2024-10-01 17:38:34.573431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.573496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.573509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.573517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.573523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.573537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.583485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.583533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.583548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.583555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.583562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.583576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.593472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.593522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.593536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.593544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.593551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.593564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 
00:38:36.221 [2024-10-01 17:38:34.603391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.603448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.603462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.603469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.603476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.603489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.613528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.613584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.613597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.613604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.613611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.613624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 00:38:36.221 [2024-10-01 17:38:34.623596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.623646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.623659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.623666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.221 [2024-10-01 17:38:34.623673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.221 [2024-10-01 17:38:34.623686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.221 qpair failed and we were unable to recover it. 
00:38:36.221 [2024-10-01 17:38:34.633580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.221 [2024-10-01 17:38:34.633668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.221 [2024-10-01 17:38:34.633682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.221 [2024-10-01 17:38:34.633689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.633699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.633713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.643631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.643680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.643693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.643700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.643707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.643720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.653648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.653695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.653712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.653720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.653726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.653741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 
00:38:36.222 [2024-10-01 17:38:34.663710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.663800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.663813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.663821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.663828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.663841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.673672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.673716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.673730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.673737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.673743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.673757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.683659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.683711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.683726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.683733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.683740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.683754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 
00:38:36.222 [2024-10-01 17:38:34.693772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.693825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.693851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.693860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.693867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.693885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.703822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.703874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.703890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.703898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.703905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.703919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.713869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.713921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.713935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.713942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.713949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.713963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 
00:38:36.222 [2024-10-01 17:38:34.723878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.723928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.723942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.723949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.723960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.723973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.733889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.733937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.733951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.733958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.733965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.733978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.743936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.743983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.744000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.744008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.744015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.744028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 
00:38:36.222 [2024-10-01 17:38:34.753901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.753949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.753963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.753970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.753977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.753990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.222 [2024-10-01 17:38:34.763992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.222 [2024-10-01 17:38:34.764043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.222 [2024-10-01 17:38:34.764056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.222 [2024-10-01 17:38:34.764063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.222 [2024-10-01 17:38:34.764070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.222 [2024-10-01 17:38:34.764083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.222 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.773977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.774034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.774048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.774055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.774062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.774075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 
00:38:36.485 [2024-10-01 17:38:34.784043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.784096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.784110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.784117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.784124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.784137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.794031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.794077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.794091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.794099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.794106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.794119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.804035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.804082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.804095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.804102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.804109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.804122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 
00:38:36.485 [2024-10-01 17:38:34.814063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.814115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.814128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.814136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.814146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.814159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.824147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.824196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.824210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.824217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.824224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.824237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.834116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.834160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.834173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.834180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.834187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.834201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 
00:38:36.485 [2024-10-01 17:38:34.844189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.844239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.844252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.844259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.844266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.844279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.854229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.854275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.854288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.854295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.854302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.854315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 00:38:36.485 [2024-10-01 17:38:34.864264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.485 [2024-10-01 17:38:34.864310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.485 [2024-10-01 17:38:34.864324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.485 [2024-10-01 17:38:34.864331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.485 [2024-10-01 17:38:34.864338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.485 [2024-10-01 17:38:34.864351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.485 qpair failed and we were unable to recover it. 
00:38:36.486 [2024-10-01 17:38:34.874256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.874299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.874312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.874319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.874326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.874339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.884289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.884335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.884348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.884355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.884362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.884375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.894304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.894351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.894365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.894372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.894379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.894392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 
00:38:36.486 [2024-10-01 17:38:34.904360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.904423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.904436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.904447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.904453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.904467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.914354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.914400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.914414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.914421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.914428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.914441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.924399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.924448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.924461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.924469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.924475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.924488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 
00:38:36.486 [2024-10-01 17:38:34.934341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.934391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.934405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.934412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.934419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.934431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.944483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.944548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.944561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.944569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.944576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.944589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.954479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.954525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.954539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.954546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.954553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.954566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 
00:38:36.486 [2024-10-01 17:38:34.964510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.964560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.964573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.964580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.964587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.964600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.974532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.974585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.974599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.974606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.974613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.974626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 00:38:36.486 [2024-10-01 17:38:34.984587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.486 [2024-10-01 17:38:34.984683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.486 [2024-10-01 17:38:34.984699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.486 [2024-10-01 17:38:34.984710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.486 [2024-10-01 17:38:34.984720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.486 [2024-10-01 17:38:34.984736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.486 qpair failed and we were unable to recover it. 
00:38:36.487 [2024-10-01 17:38:34.994581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.487 [2024-10-01 17:38:34.994629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.487 [2024-10-01 17:38:34.994644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.487 [2024-10-01 17:38:34.994654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.487 [2024-10-01 17:38:34.994661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.487 [2024-10-01 17:38:34.994674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.487 qpair failed and we were unable to recover it. 00:38:36.487 [2024-10-01 17:38:35.004584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.487 [2024-10-01 17:38:35.004676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.487 [2024-10-01 17:38:35.004691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.487 [2024-10-01 17:38:35.004698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.487 [2024-10-01 17:38:35.004705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.487 [2024-10-01 17:38:35.004718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.487 qpair failed and we were unable to recover it. 00:38:36.487 [2024-10-01 17:38:35.014619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.487 [2024-10-01 17:38:35.014664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.487 [2024-10-01 17:38:35.014679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.487 [2024-10-01 17:38:35.014686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.487 [2024-10-01 17:38:35.014692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.487 [2024-10-01 17:38:35.014706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.487 qpair failed and we were unable to recover it. 
00:38:36.487 [2024-10-01 17:38:35.024738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.487 [2024-10-01 17:38:35.024811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.487 [2024-10-01 17:38:35.024825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.487 [2024-10-01 17:38:35.024832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.487 [2024-10-01 17:38:35.024838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.487 [2024-10-01 17:38:35.024852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.487 qpair failed and we were unable to recover it. 00:38:36.749 [2024-10-01 17:38:35.034662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.749 [2024-10-01 17:38:35.034709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.749 [2024-10-01 17:38:35.034722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.749 [2024-10-01 17:38:35.034729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.749 [2024-10-01 17:38:35.034736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.749 [2024-10-01 17:38:35.034749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.749 qpair failed and we were unable to recover it. 00:38:36.749 [2024-10-01 17:38:35.044709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.749 [2024-10-01 17:38:35.044756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.749 [2024-10-01 17:38:35.044770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.749 [2024-10-01 17:38:35.044777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.749 [2024-10-01 17:38:35.044784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.749 [2024-10-01 17:38:35.044798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.749 qpair failed and we were unable to recover it. 
00:38:36.749 [2024-10-01 17:38:35.054724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.749 [2024-10-01 17:38:35.054780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.749 [2024-10-01 17:38:35.054793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.054800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.054807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.054820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.064779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.064832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.064845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.064853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.064859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.064872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.074696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.074743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.074756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.074764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.074770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.074783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 
00:38:36.750 [2024-10-01 17:38:35.084823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.084904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.084918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.084930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.084936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.084950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.094863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.094954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.094968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.094976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.094983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.095000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.104916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.104990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.105009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.105016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.105022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.105036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 
00:38:36.750 [2024-10-01 17:38:35.114911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.114969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.114982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.114990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.115000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.115014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.124936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.124986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.125004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.125012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.125018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.125032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.134965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.135013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.135027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.135034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.135041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.135054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 
00:38:36.750 [2024-10-01 17:38:35.145035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.145087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.145101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.145108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.145114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.145128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.154897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.154983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.155000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.155008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.155014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.155027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.165056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.165155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.165169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.165176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.165184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.165198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 
00:38:36.750 [2024-10-01 17:38:35.175042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.175093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.175107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.175117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.175124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.175137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.185105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.750 [2024-10-01 17:38:35.185154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.750 [2024-10-01 17:38:35.185168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.750 [2024-10-01 17:38:35.185175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.750 [2024-10-01 17:38:35.185182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.750 [2024-10-01 17:38:35.185195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.750 qpair failed and we were unable to recover it. 00:38:36.750 [2024-10-01 17:38:35.195094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.195139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.195152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.195160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.195166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.195179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 
00:38:36.751 [2024-10-01 17:38:35.205161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.205215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.205229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.205236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.205242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.205256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.215160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.215208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.215221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.215228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.215234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.215248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.225235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.225294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.225307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.225315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.225322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.225335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 
00:38:36.751 [2024-10-01 17:38:35.235212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.235264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.235278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.235286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.235292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.235306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.245238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.245310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.245324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.245331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.245338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.245351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.255279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.255327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.255341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.255348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.255355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.255368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 
00:38:36.751 [2024-10-01 17:38:35.265316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.265367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.265380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.265391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.265398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.265412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.275334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.275382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.275395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.275402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.275409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.275423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 00:38:36.751 [2024-10-01 17:38:35.285365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.751 [2024-10-01 17:38:35.285416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.751 [2024-10-01 17:38:35.285430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.751 [2024-10-01 17:38:35.285437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.751 [2024-10-01 17:38:35.285443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:36.751 [2024-10-01 17:38:35.285457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:36.751 qpair failed and we were unable to recover it. 
00:38:37.015 [2024-10-01 17:38:35.295463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.295515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.295528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.295535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.295542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.295556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.305454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.305509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.305523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.305530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.305537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.305550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.315452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.315496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.315511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.315518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.315525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.315539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 
00:38:37.015 [2024-10-01 17:38:35.325481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.325530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.325544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.325551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.325558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.325571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.335501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.335551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.335565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.335572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.335579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.335592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.345556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.345603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.345617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.345624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.345630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.345644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 
00:38:37.015 [2024-10-01 17:38:35.355595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.355645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.355658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.355668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.355675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.355688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.365604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.365654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.365668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.365675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.365681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.365695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.375599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.375647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.375661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.375668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.375675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.375688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 
00:38:37.015 [2024-10-01 17:38:35.385653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.385708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.385726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.385733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.385740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.385756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.395667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.395763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.395778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.015 [2024-10-01 17:38:35.395785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.015 [2024-10-01 17:38:35.395792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.015 [2024-10-01 17:38:35.395806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.015 qpair failed and we were unable to recover it. 00:38:37.015 [2024-10-01 17:38:35.405702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.015 [2024-10-01 17:38:35.405753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.015 [2024-10-01 17:38:35.405779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.405788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.405795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.405813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 
00:38:37.016 [2024-10-01 17:38:35.415725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.415777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.415794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.415801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.415808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.415823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.425839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.425907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.425921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.425929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.425935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.425949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.435762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.435807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.435821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.435828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.435835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.435848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 
00:38:37.016 [2024-10-01 17:38:35.445802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.445865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.445884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.445891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.445898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.445911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.455838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.455892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.455905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.455913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.455919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.455933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.465828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.465885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.465898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.465906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.465913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.465926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 
00:38:37.016 [2024-10-01 17:38:35.475871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.475921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.475935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.475942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.475949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.475963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.485911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.485961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.485975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.485983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.485990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.486008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.495941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.495985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.496003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.496011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.496018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.496033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 
00:38:37.016 [2024-10-01 17:38:35.505973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.506045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.506059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.506066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.506073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.506087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.515972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.516022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.516036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.516043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.516050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.516063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.525961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.526038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.526062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.526070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.526077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.526092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 
00:38:37.016 [2024-10-01 17:38:35.536018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.016 [2024-10-01 17:38:35.536108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.016 [2024-10-01 17:38:35.536125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.016 [2024-10-01 17:38:35.536132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.016 [2024-10-01 17:38:35.536139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.016 [2024-10-01 17:38:35.536152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.016 qpair failed and we were unable to recover it. 00:38:37.016 [2024-10-01 17:38:35.546098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.017 [2024-10-01 17:38:35.546148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.017 [2024-10-01 17:38:35.546162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.017 [2024-10-01 17:38:35.546169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.017 [2024-10-01 17:38:35.546176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.017 [2024-10-01 17:38:35.546189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.017 qpair failed and we were unable to recover it. 00:38:37.017 [2024-10-01 17:38:35.556131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.017 [2024-10-01 17:38:35.556208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.017 [2024-10-01 17:38:35.556221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.017 [2024-10-01 17:38:35.556229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.017 [2024-10-01 17:38:35.556235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.017 [2024-10-01 17:38:35.556249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.017 qpair failed and we were unable to recover it. 
00:38:37.278 [2024-10-01 17:38:35.566118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.278 [2024-10-01 17:38:35.566164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.278 [2024-10-01 17:38:35.566179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.278 [2024-10-01 17:38:35.566187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.278 [2024-10-01 17:38:35.566193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.278 [2024-10-01 17:38:35.566207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.278 qpair failed and we were unable to recover it. 00:38:37.278 [2024-10-01 17:38:35.576140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.278 [2024-10-01 17:38:35.576191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.278 [2024-10-01 17:38:35.576205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.278 [2024-10-01 17:38:35.576214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.278 [2024-10-01 17:38:35.576221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.278 [2024-10-01 17:38:35.576235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.278 qpair failed and we were unable to recover it. 00:38:37.278 [2024-10-01 17:38:35.586207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.278 [2024-10-01 17:38:35.586254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.278 [2024-10-01 17:38:35.586269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.278 [2024-10-01 17:38:35.586277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.278 [2024-10-01 17:38:35.586283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.278 [2024-10-01 17:38:35.586298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 
00:38:37.279 [2024-10-01 17:38:35.596226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.596319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.596333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.596341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.596348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.596362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.606237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.606315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.606329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.606336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.606343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.606357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.616281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.616351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.616364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.616371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.616378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.616391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 
00:38:37.279 [2024-10-01 17:38:35.626322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.626369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.626387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.626394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.626401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.626414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.636346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.636402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.636415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.636423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.636430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.636443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.646367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.646461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.646475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.646482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.646489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.646502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 
00:38:37.279 [2024-10-01 17:38:35.656353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.656404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.656417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.656425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.656432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6ff1f0 00:38:37.279 [2024-10-01 17:38:35.656445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.666372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.666472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.666536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.666561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.666583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f956c000b90 00:38:37.279 [2024-10-01 17:38:35.666649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.676391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.279 [2024-10-01 17:38:35.676463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.279 [2024-10-01 17:38:35.676495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.279 [2024-10-01 17:38:35.676510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.279 [2024-10-01 17:38:35.676525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f956c000b90 00:38:37.279 [2024-10-01 17:38:35.676556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:37.279 qpair failed and we were unable to recover it. 00:38:37.279 [2024-10-01 17:38:35.676738] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:38:37.279 A controller has encountered a failure and is being reset. 00:38:37.279 [2024-10-01 17:38:35.676848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70d0b0 (9): Bad file descriptor 00:38:37.279 Controller properly reset. 
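The entries above repeat a single failure signature for every I/O queue-pair connect attempt: the target rejects the fabrics CONNECT with "Unknown controller ID 0x1", the host sees the completion come back with sct 1 / sc 130, and the qpair is abandoned, until a failed Keep Alive finally triggers a full controller reset ("Controller properly reset"). When triaging a run like this it usually suffices to reduce the noise to a count of abandoned qpairs and the distinct CONNECT statuses. A minimal triage sketch follows, assuming the console output has been saved to a file; the name build.log is a placeholder, not something the test itself produces.

```bash
#!/usr/bin/env bash
# Summarize the repeated NVMe-oF CONNECT failures in a captured autotest log.
# "build.log" is a hypothetical capture of this console output.
set -euo pipefail
log=${1:-build.log}

# Number of qpairs the host gave up on.
printf 'abandoned qpairs: '
grep -o 'qpair failed and we were unable to recover it' "$log" | wc -l

# Distinct status codes the target returned for the CONNECT command.
echo 'CONNECT completion statuses:'
grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' "$log" | sort | uniq -c

# Confirm the host noticed the failure and recovered the controller.
grep -E 'Submitting Keep Alive failed|Controller properly reset' "$log" || true
```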
00:38:37.279 Initializing NVMe Controllers 00:38:37.279 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:37.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:37.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:37.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:37.279 Initialization complete. Launching workers. 00:38:37.279 Starting thread on core 1 00:38:37.279 Starting thread on core 2 00:38:37.279 Starting thread on core 3 00:38:37.279 Starting thread on core 0 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:37.279 00:38:37.279 real 0m11.444s 00:38:37.279 user 0m21.638s 00:38:37.279 sys 0m3.534s 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:37.279 ************************************ 00:38:37.279 END TEST nvmf_target_disconnect_tc2 00:38:37.279 ************************************ 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.279 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.279 rmmod nvme_tcp 00:38:37.540 rmmod nvme_fabrics 00:38:37.540 rmmod nvme_keyring 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3298373 ']' 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3298373 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3298373 ']' 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3298373 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298373 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298373' 00:38:37.540 killing process with pid 3298373 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3298373 00:38:37.540 17:38:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3298373 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.540 17:38:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.086 17:38:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:40.087 00:38:40.087 real 0m21.489s 00:38:40.087 user 0m49.814s 00:38:40.087 sys 0m9.424s 00:38:40.087 17:38:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:40.087 17:38:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:40.087 ************************************ 00:38:40.087 END TEST nvmf_target_disconnect 00:38:40.087 ************************************ 00:38:40.087 17:38:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:40.087 00:38:40.087 real 7m48.281s 00:38:40.087 user 17m11.177s 00:38:40.087 sys 2m19.016s 00:38:40.087 17:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:40.087 17:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.087 ************************************ 00:38:40.087 END TEST nvmf_host 00:38:40.087 ************************************ 00:38:40.087 17:38:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:40.087 17:38:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:40.087 17:38:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:40.087 17:38:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:40.087 17:38:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:40.087 17:38:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:40.087 ************************************ 00:38:40.087 START TEST nvmf_target_core_interrupt_mode 00:38:40.087 ************************************ 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:40.087 * Looking for test storage... 00:38:40.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:40.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.087 --rc genhtml_branch_coverage=1 00:38:40.087 --rc genhtml_function_coverage=1 00:38:40.087 --rc genhtml_legend=1 00:38:40.087 --rc geninfo_all_blocks=1 00:38:40.087 --rc geninfo_unexecuted_blocks=1 00:38:40.087 00:38:40.087 ' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:40.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.087 --rc genhtml_branch_coverage=1 00:38:40.087 --rc genhtml_function_coverage=1 00:38:40.087 --rc genhtml_legend=1 00:38:40.087 --rc geninfo_all_blocks=1 00:38:40.087 --rc geninfo_unexecuted_blocks=1 00:38:40.087 00:38:40.087 ' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:40.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.087 --rc genhtml_branch_coverage=1 00:38:40.087 --rc genhtml_function_coverage=1 00:38:40.087 --rc genhtml_legend=1 00:38:40.087 --rc geninfo_all_blocks=1 00:38:40.087 --rc geninfo_unexecuted_blocks=1 00:38:40.087 00:38:40.087 ' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:40.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.087 --rc genhtml_branch_coverage=1 00:38:40.087 --rc genhtml_function_coverage=1 00:38:40.087 --rc genhtml_legend=1 00:38:40.087 --rc geninfo_all_blocks=1 00:38:40.087 --rc geninfo_unexecuted_blocks=1 00:38:40.087 00:38:40.087 ' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.087 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:40.088 ************************************ 00:38:40.088 START TEST nvmf_abort 00:38:40.088 ************************************ 00:38:40.088 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:40.351 * Looking for test storage... 00:38:40.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:40.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.351 --rc genhtml_branch_coverage=1 00:38:40.351 --rc genhtml_function_coverage=1 00:38:40.351 --rc genhtml_legend=1 00:38:40.351 --rc geninfo_all_blocks=1 00:38:40.351 --rc geninfo_unexecuted_blocks=1 00:38:40.351 00:38:40.351 ' 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:40.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.351 --rc genhtml_branch_coverage=1 00:38:40.351 --rc genhtml_function_coverage=1 00:38:40.351 --rc genhtml_legend=1 00:38:40.351 --rc geninfo_all_blocks=1 00:38:40.351 --rc geninfo_unexecuted_blocks=1 00:38:40.351 00:38:40.351 ' 00:38:40.351 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:40.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.351 --rc genhtml_branch_coverage=1 00:38:40.351 --rc genhtml_function_coverage=1 00:38:40.351 --rc genhtml_legend=1 00:38:40.351 --rc geninfo_all_blocks=1 00:38:40.351 --rc geninfo_unexecuted_blocks=1 00:38:40.351 00:38:40.351 ' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:40.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.352 --rc genhtml_branch_coverage=1 00:38:40.352 --rc genhtml_function_coverage=1 00:38:40.352 --rc genhtml_legend=1 00:38:40.352 --rc geninfo_all_blocks=1 00:38:40.352 --rc geninfo_unexecuted_blocks=1 00:38:40.352 00:38:40.352 ' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.352 17:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.352 17:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.942 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.943 17:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:46.943 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:46.943 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:46.943 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:46.943 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.943 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.203 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:47.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:38:47.203 00:38:47.203 --- 10.0.0.2 ping statistics --- 00:38:47.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.204 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:47.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:38:47.204 00:38:47.204 --- 10.0.0.1 ping statistics --- 00:38:47.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.204 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3303814 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3303814 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3303814 ']' 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:47.204 17:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:47.204 [2024-10-01 17:38:45.740469] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.204 [2024-10-01 17:38:45.741469] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:38:47.204 [2024-10-01 17:38:45.741515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.463 [2024-10-01 17:38:45.826955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:47.463 [2024-10-01 17:38:45.871079] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.463 [2024-10-01 17:38:45.871134] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.463 [2024-10-01 17:38:45.871142] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.463 [2024-10-01 17:38:45.871149] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.463 [2024-10-01 17:38:45.871156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.463 [2024-10-01 17:38:45.871279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:47.463 [2024-10-01 17:38:45.871443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.463 [2024-10-01 17:38:45.871444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:47.463 [2024-10-01 17:38:45.954453] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.463 [2024-10-01 17:38:45.954455] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:47.463 [2024-10-01 17:38:45.955118] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
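For reference, the network-namespace bring-up and target launch that the xtrace above performs can be condensed into a short sequence. This is a sketch reconstructed from the logged nvmf/common.sh steps, not a replacement for the script itself; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses, and the nvmf_tgt path and flags are taken directly from this run.

  # Target port (cvl_0_0, 10.0.0.2) moves into its own namespace; the
  # initiator port (cvl_0_1, 10.0.0.1) stays in the default namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity checks, as in the log:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Launch the target inside the namespace in interrupt mode (path as logged):
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE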
00:38:47.463 [2024-10-01 17:38:45.955409] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:48.034 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:48.034 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:38:48.034 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:48.034 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:48.034 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 [2024-10-01 17:38:46.600439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 Malloc0 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 Delay0 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 [2024-10-01 17:38:46.676390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.295 17:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:48.295 [2024-10-01 17:38:46.829049] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:50.835 Initializing NVMe Controllers 00:38:50.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:50.835 controller IO queue size 128 less than required 00:38:50.835 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:50.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:50.835 Initialization complete. Launching workers. 
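Condensed for reference, the nvmf_abort target setup traced above amounts to the following RPC sequence. This is a sketch reconstructed from the xtrace output; rpc_cmd stands for the harness wrapper around scripts/rpc.py and $SPDK shortens the workspace path, both shorthands introduced here rather than literal log text.

  # target side: TCP transport, a delay bdev on top of a malloc bdev, one subsystem with a data and a discovery listener
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: queue reads at depth 128 against the slow Delay0 namespace and abort them
  $SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters that follow report how many aborts the example submitted against the Delay0-backed namespace and how many of them succeeded.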
00:38:50.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29152 00:38:50.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29209, failed to submit 66 00:38:50.835 success 29152, unsuccessful 57, failed 0 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.835 rmmod nvme_tcp 00:38:50.835 rmmod nvme_fabrics 00:38:50.835 rmmod nvme_keyring 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3303814 ']' 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3303814 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3303814 ']' 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3303814 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:50.835 17:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3303814 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3303814' 00:38:50.835 killing process with pid 3303814 
00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3303814 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3303814 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.835 17:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.748 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.749 00:38:52.749 real 0m12.708s 00:38:52.749 user 0m10.499s 00:38:52.749 sys 0m6.572s 00:38:52.749 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:52.749 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.749 ************************************ 00:38:52.749 END TEST nvmf_abort 00:38:52.749 ************************************ 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:53.013 ************************************ 00:38:53.013 START TEST nvmf_ns_hotplug_stress 00:38:53.013 ************************************ 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:53.013 * Looking for test storage... 
00:38:53.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:53.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.013 --rc genhtml_branch_coverage=1 00:38:53.013 --rc genhtml_function_coverage=1 00:38:53.013 --rc genhtml_legend=1 00:38:53.013 --rc geninfo_all_blocks=1 00:38:53.013 --rc geninfo_unexecuted_blocks=1 00:38:53.013 00:38:53.013 ' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:53.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.013 --rc genhtml_branch_coverage=1 00:38:53.013 --rc genhtml_function_coverage=1 00:38:53.013 --rc genhtml_legend=1 00:38:53.013 --rc geninfo_all_blocks=1 00:38:53.013 --rc geninfo_unexecuted_blocks=1 00:38:53.013 00:38:53.013 ' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:53.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.013 --rc genhtml_branch_coverage=1 00:38:53.013 --rc genhtml_function_coverage=1 00:38:53.013 --rc genhtml_legend=1 00:38:53.013 --rc geninfo_all_blocks=1 00:38:53.013 --rc geninfo_unexecuted_blocks=1 00:38:53.013 00:38:53.013 ' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:53.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.013 --rc genhtml_branch_coverage=1 00:38:53.013 --rc genhtml_function_coverage=1 
00:38:53.013 --rc genhtml_legend=1 00:38:53.013 --rc geninfo_all_blocks=1 00:38:53.013 --rc geninfo_unexecuted_blocks=1 00:38:53.013 00:38:53.013 ' 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.013 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:53.275 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:53.276 17:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.414 17:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.414 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.415 17:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:01.415 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:01.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:01.415 
17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:01.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:01.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.415 17:38:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.415 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:39:01.416 00:39:01.416 --- 10.0.0.2 ping statistics --- 00:39:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.416 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:01.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:39:01.416 00:39:01.416 --- 10.0.0.1 ping statistics --- 00:39:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.416 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3308502 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3308502 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3308502 ']' 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:01.416 17:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:01.416 [2024-10-01 17:38:59.034042] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.416 [2024-10-01 17:38:59.035193] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:39:01.416 [2024-10-01 17:38:59.035250] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.416 [2024-10-01 17:38:59.125791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:01.416 [2024-10-01 17:38:59.173417] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.416 [2024-10-01 17:38:59.173473] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.416 [2024-10-01 17:38:59.173481] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.416 [2024-10-01 17:38:59.173489] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.416 [2024-10-01 17:38:59.173495] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:01.416 [2024-10-01 17:38:59.173630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.416 [2024-10-01 17:38:59.173793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.416 [2024-10-01 17:38:59.173795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.416 [2024-10-01 17:38:59.253649] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:01.416 [2024-10-01 17:38:59.253732] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:01.416 [2024-10-01 17:38:59.254351] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:01.416 [2024-10-01 17:38:59.254657] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
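Before the hotplug-stress body runs, nvmftestinit (traced above) moves the target-side port into a private network namespace, addresses both ends, opens TCP/4420, and starts nvmf_tgt in interrupt mode inside that namespace. A minimal sketch of that sequence, using the interface names and addresses from this run; $SPDK again shortens the workspace path, the iptables comment argument is dropped, and backgrounding the target with & is an assumption about how nvmfappstart runs it.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The ping exchange above (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirms the two ends can reach each other, and -m 0xE matches the three reactors reported on cores 1-3 once the target is up, each poll-group thread then being switched to interrupt mode as the thread.c notices show.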
00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:01.416 17:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:01.677 [2024-10-01 17:39:00.062760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.677 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:01.938 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.938 [2024-10-01 17:39:00.443532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.938 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:02.199 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:02.461 Malloc0 00:39:02.461 17:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:02.722 Delay0 00:39:02.722 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.722 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:02.984 NULL1 00:39:02.984 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
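With the target up, ns_hotplug_stress.sh builds its test subsystem through rpc.py. Condensing the trace above into the plain command sequence; rpc is a shorthand variable introduced here for the full scripts/rpc.py path shown in the log.

  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512                   # size 1000, 512-byte blocks; resized by the loop below
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

cnode1 is created with -a (allow any host), -s (serial number) and -m 10 (namespace cap), and ends up with two namespaces: the artificially slow Delay0 and the resizable NULL1. The stress loop that follows repeatedly removes namespace 1, re-adds Delay0, and resizes NULL1 while I/O is in flight.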
00:39:03.244 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3309225 00:39:03.244 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:03.244 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:03.244 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.244 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.505 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:03.505 17:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:03.765 true 00:39:03.765 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:03.765 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.765 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.025 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:04.025 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:04.285 true 00:39:04.285 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:04.285 17:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.668 Read completed with error (sct=0, sc=11) 00:39:05.668 17:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:05.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:05.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:05.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:05.668 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:39:05.668 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:05.668 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:05.928 true 00:39:05.928 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:05.928 17:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.868 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.868 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:06.868 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:06.868 true 00:39:07.129 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:07.129 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.129 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.390 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:07.390 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:07.650 true 00:39:07.650 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:07.650 17:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.592 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:08.853 
Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:08.853 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:39:08.853 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:39:09.113 true
00:39:09.113 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:09.113 17:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:10.053 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:10.053 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:39:10.053 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:39:10.314 true
00:39:10.315 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:10.315 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:10.315 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:10.575 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:39:10.575 17:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:39:10.575 true
00:39:10.837 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:10.837 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:10.837 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:11.097 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:39:11.097 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:39:11.358 true
00:39:11.358 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:11.358 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:11.358 17:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:11.617 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:39:11.617 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:39:11.876 true
00:39:11.876 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:11.876 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:11.876 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
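The traced iterations above are the core of the ns_hotplug_stress loop: each pass bumps null_size and resizes the NULL1 null bdev, checks that the target process (PID 3309225) is still alive, then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 and re-adds it backed by the Delay0 bdev while I/O keeps running against it. A minimal sketch of that loop, reconstructed only from the commands visible in the trace (this is not the actual ns_hotplug_stress.sh; the loop structure and variable names are assumptions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    pid=3309225                                      # nvmf target process under test (from the trace)
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1005
    while kill -0 "$pid"; do                         # keep going while the target is alive
        null_size=$((null_size + 1))                 # @49: next size (1006, 1007, ...)
        "$rpc" bdev_null_resize NULL1 "$null_size"   # @50: resize NULL1 (the "true" lines are this RPC's output)
        kill -0 "$pid"                               # @44: re-check the target before touching namespaces
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: hot-add it back, backed by the Delay0 bdev
    done

The remove/add pair racing against in-flight reads is presumably what produces the burst of read errors that follows.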
00:39:11.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.166 [2024-10-01 17:39:10.532463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the identical nvmf_bdev_ctrlr_read_cmd error line repeats continuously, with timestamps from 17:39:10.532513 through 17:39:10.549040 (console time 00:39:12.166-00:39:12.170); only the timestamps differ between occurrences]
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 
[2024-10-01 17:39:10.549753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.549902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.550971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551849] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.551968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.170 [2024-10-01 17:39:10.552626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 
[2024-10-01 17:39:10.552767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.552984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.553972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554310] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.554958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.171 [2024-10-01 17:39:10.555145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555408] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.555991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 
[2024-10-01 17:39:10.556172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.556581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.171 [2024-10-01 17:39:10.557435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.557977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558044] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 
[2024-10-01 17:39:10.558802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.558890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.559984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560922] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.560999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 
[2024-10-01 17:39:10.561838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.561980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.172 [2024-10-01 17:39:10.562366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
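For context on the flood of identical errors above: the message itself states the check that fails, namely that the requested read transfer (NLB 1 * block size 512 = 512 bytes) exceeds the reported SGL length of 1, so nvmf_bdev_ctrlr_read_cmd refuses the read and each such read is completed with an error (sct=0, sc=15), as the suppressed-message line notes. The C sketch below is a minimal stand-alone illustration of that kind of length check; the struct, field, and function names are invented for the example and are not SPDK's actual code.

/*
 * Minimal sketch of a read-length validation like the one reported above:
 * the read is refused when NLB * block_size exceeds the SGL length.
 * Names here are illustrative assumptions, not SPDK source.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct read_cmd {
    uint64_t nlb;        /* number of logical blocks requested */
    uint32_t block_size; /* logical block size in bytes */
    uint64_t sgl_length; /* bytes available in the host-supplied SGL */
};

/* Return true if the command may proceed, false if it must be failed. */
static bool read_cmd_length_ok(const struct read_cmd *cmd)
{
    uint64_t required = cmd->nlb * cmd->block_size;

    if (required > cmd->sgl_length) {
        fprintf(stderr,
                "*ERROR*: Read NLB %llu * block size %u > SGL length %llu\n",
                (unsigned long long)cmd->nlb, cmd->block_size,
                (unsigned long long)cmd->sgl_length);
        return false;
    }
    return true;
}

int main(void)
{
    /* Mirrors the values seen in the log: 1 block of 512 bytes vs. SGL length 1. */
    struct read_cmd cmd = { .nlb = 1, .block_size = 512, .sgl_length = 1 };

    if (!read_cmd_length_ok(&cmd)) {
        /* In the target this corresponds to completing the read with an error status. */
        return 1;
    }
    return 0;
}

Compiled with any C compiler, the sketch prints one line in the same format as the entries above and exits non-zero, mirroring how each offending read in the log is completed with an error rather than serviced.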
size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:12.173 [2024-10-01 17:39:10.562681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.562985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:12.173 [2024-10-01 17:39:10.563069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:39:12.173 [2024-10-01 17:39:10.563294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.563978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.564979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565181] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.565724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 
[2024-10-01 17:39:10.566346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.173 [2024-10-01 17:39:10.566848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.566879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.566905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.566937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.566967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567858] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.567951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 
[2024-10-01 17:39:10.568937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.568968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.569962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570869] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.570991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 
[2024-10-01 17:39:10.571681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.174 [2024-10-01 17:39:10.571800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.571982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.572974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573569] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.573965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 
[2024-10-01 17:39:10.574428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.574826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.575968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175 [2024-10-01 17:39:10.576410] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.175
00:39:12.175 - 00:39:12.179 [2024-10-01 17:39:10.576473 - 17:39:10.596703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated several hundred times with advancing timestamps)
00:39:12.178 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:39:12.179 [2024-10-01 17:39:10.596764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.596980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.179 [2024-10-01 17:39:10.597245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597576] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.597991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 
[2024-10-01 17:39:10.598498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.598992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.599971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600099] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.600333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 
[2024-10-01 17:39:10.601687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.601975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.180 [2024-10-01 17:39:10.602568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.602979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603472] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.603964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 
[2024-10-01 17:39:10.604288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.604992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.605992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606423] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.606967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 
[2024-10-01 17:39:10.607245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.181 [2024-10-01 17:39:10.607630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.607808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.608977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182 [2024-10-01 17:39:10.609016] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.182
[the same "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats verbatim for every timestamp from 2024-10-01 17:39:10.609082 through 17:39:10.629235 (elapsed 00:39:12.182-00:39:12.186); the duplicate entries are omitted]
[2024-10-01 17:39:10.629278] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.186 [2024-10-01 17:39:10.629953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.629982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.186 [2024-10-01 17:39:10.630196] ctrlr_bdev.c: 
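For context, the repeated ctrlr_bdev.c:361 message above is emitted when the transfer length implied by a read command (NLB blocks multiplied by the 512-byte block size) exceeds the data length described by the request's SGL, and the affected reads are then completed with an error (sct=0, sc=15), as the suppressed message notes. A minimal standalone sketch of that length check follows; struct fake_read_req and check_read_length are illustrative stand-ins, not the actual SPDK request structures or functions.

#include <inttypes.h>
#include <stdio.h>

/* Simplified stand-ins for the fields involved in the check; the real
 * SPDK code operates on its own request and bdev structures. */
struct fake_read_req {
	uint64_t nlb;        /* number of logical blocks requested */
	uint32_t block_size; /* namespace block size in bytes (512 in the log) */
	uint32_t sgl_length; /* total data length described by the SGL */
};

/* Returns 0 when the read may proceed, -1 when it must be rejected
 * because the requested transfer is larger than the SGL can hold. */
static int check_read_length(const struct fake_read_req *req)
{
	uint64_t xfer_len = req->nlb * req->block_size;

	if (xfer_len > req->sgl_length) {
		fprintf(stderr,
		        "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
		        req->nlb, req->block_size, req->sgl_length);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* Mirrors the failing case from the log: 1 block * 512 bytes vs. a 1-byte SGL. */
	struct fake_read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
	return check_read_length(&req) == 0 ? 0 : 1;
}

Compiling and running this sketch with nlb=1, block_size=512, and sgl_length=1 reproduces the exact numbers printed in the log output above.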
00:39:12.186 - 00:39:12.189 [the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd read-length error repeats continuously, timestamps 17:39:10.629953 through 17:39:10.644892]
00:39:12.189 
[2024-10-01 17:39:10.644920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.644951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.644982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.645973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646618] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.646973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.647010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.647038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.647069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.189 [2024-10-01 17:39:10.647097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 
[2024-10-01 17:39:10.647787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.647977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.648986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649353] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.649983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 
[2024-10-01 17:39:10.650507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.650971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.651871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652400] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.190 [2024-10-01 17:39:10.652458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.652965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 
[2024-10-01 17:39:10.653229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.653970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.654966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655148] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 
[2024-10-01 17:39:10.655961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.655989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.656499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.191 [2024-10-01 17:39:10.657356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.193 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.195 [identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* entries from 17:39:10.657356 through 17:39:10.676890 omitted as duplicates] 00:39:12.195 [2024-10-01 17:39:10.676890] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.676931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.676961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.676990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.195 [2024-10-01 17:39:10.677207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 
[2024-10-01 17:39:10.677820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.677988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.678982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679416] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.679788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 
[2024-10-01 17:39:10.680821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.680971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.681991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.196 [2024-10-01 17:39:10.682206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682515] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.197 [2024-10-01 17:39:10.682703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.682985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 
[2024-10-01 17:39:10.683330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.683972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.684969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685230] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.488 [2024-10-01 17:39:10.685572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.685969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 [2024-10-01 17:39:10.686005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.489 
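The errors above (and continuing below) are the NVMe-oF read handler repeatedly rejecting a request whose transfer length, NLB 1 * block size 512 = 512 bytes, exceeds the 1-byte SGL attached to the command; the test drives that rejection path in a tight loop, so the same line is logged once per iteration. A minimal standalone sketch of that kind of length check follows, using hypothetical names (read_cmd_length_ok) rather than the real ctrlr_bdev.c implementation:

/*
 * Illustrative sketch only -- hypothetical names, not the actual SPDK code.
 * It shows the kind of check the error above reports: a Read is rejected
 * when the data it asks for (NLB * block size) is larger than the buffer
 * described by the request's SGL.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Return true if the Read fits in the supplied SGL, false if it must be
 * rejected (the case the test above exercises over and over). */
static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint64_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
			num_blocks, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The failing combination logged above: 1 block of 512 bytes against
	 * an SGL that only describes 1 byte of payload. */
	if (!read_cmd_length_ok(1, 512, 1)) {
		printf("command rejected\n");
	}
	return 0;
}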
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error continues, one line per request, from [2024-10-01 17:39:10.686036] through [2024-10-01 17:39:10.692502] (wall clock 00:39:12.489-00:39:12.490); the identical entries are omitted here ...]
00:39:12.490 [2024-10-01 17:39:10.692530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.692979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693606] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.693866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 
[2024-10-01 17:39:10.694620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.694989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.695987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696410] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.696992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 
[2024-10-01 17:39:10.697203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.490 [2024-10-01 17:39:10.697233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.697983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.698980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699425] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.699975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 
[2024-10-01 17:39:10.700233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.700986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701954] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.701985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.491 [2024-10-01 17:39:10.702016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.491 [2024-10-01 17:39:10.702436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702650] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.702820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 
[2024-10-01 17:39:10.703659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.703970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.492 [2024-10-01 17:39:10.704757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical *ERROR* line from ctrlr_bdev.c:361 (nvmf_bdev_ctrlr_read_cmd) is repeated several hundred more times, wall-clock 17:39:10.704786 through 17:39:10.724480, elapsed 00:39:12.492-00:39:12.495; the verbatim repetitions are omitted here ...]
00:39:12.495 true
NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.724968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725267] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.725975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.726014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.726044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.726077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 
[2024-10-01 17:39:10.726109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.726295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.495 [2024-10-01 17:39:10.726325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.726988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.727986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728324] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.728988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 
[2024-10-01 17:39:10.729268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.729986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.730998] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.731962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.496 [2024-10-01 17:39:10.732255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 
[2024-10-01 17:39:10.732313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.732985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.733980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734016] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 
[2024-10-01 17:39:10.734829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.734980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.497 [2024-10-01 17:39:10.735701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
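The message being repeated above is a request-validation failure in SPDK's NVMe-oF bdev layer: before a read is submitted to the backing bdev, nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:361, as named in the log) compares the transfer implied by the command, number of logical blocks times the block size, with the data length described by the request's SGL, and rejects the command when the transfer is larger. For every occurrence logged here the arithmetic is the same:

    1 block x 512 bytes/block = 512 bytes requested  >  1 byte described by the SGL

so each read is completed with an error instead of being issued to the namespace, which lines up with the host-side "Read completed with error (sct=0, sc=15)" entries further down (sc=15 appears to correspond to the generic Data SGL Length Invalid status, 0x0f, though that mapping is inferred rather than stated in this log). The function and file names come straight from the message; the description of the check is a reading of that message, not a quote of the SPDK source.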
00:39:12.497 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeated continuously from 17:39:10.736029 through 17:39:10.736251; duplicate log lines omitted ...]
00:39:12.497 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeated continuously from 17:39:10.736289 through 17:39:10.738695; duplicate log lines omitted ...]
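The two shell trace lines above are the ns_hotplug_stress driver moving to its next step: it first verifies with kill -0 that the target process (PID 3309225) is still running, then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 through scripts/rpc.py, which is what turns the reads still in flight into the failures logged around it. A minimal sketch of that kind of remove/re-add loop follows; the iteration count, sleep intervals, bdev name (Malloc0) and use of the default RPC socket are illustrative assumptions, not details taken from this run.

    #!/usr/bin/env bash
    # Sketch of a namespace hotplug stress loop.
    # Assumptions: iteration count, sleeps, bdev name Malloc0, default RPC socket.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    target_pid=$1                 # PID of the running nvmf_tgt, e.g. 3309225 in this run

    for i in $(seq 1 20); do
        # kill -0 sends no signal; its exit status only reports whether the PID still exists.
        kill -0 "$target_pid" || { echo "target process is gone" >&2; exit 1; }

        # Detach namespace 1 while I/O is in flight, give the host time to notice,
        # then re-attach the same bdev under the same NSID.
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
        sleep 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1
        sleep 1
    done

Run against an already-configured target as, for example, ./ns_hotplug_sketch.sh <nvmf_tgt_pid> (the script name is hypothetical); kill -0 is used here purely as a cheap liveness probe before each hotplug cycle.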
[2024-10-01 17:39:10.738695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 Message suppressed 999 times: [2024-10-01 17:39:10.738731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 Read completed with error (sct=0, sc=15) 00:39:12.498 [2024-10-01 17:39:10.738763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.738980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 
17:39:10.739457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.739984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:39:12.498 [2024-10-01 17:39:10.740369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.740998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.741974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742374] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.742990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 
[2024-10-01 17:39:10.743312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.498 [2024-10-01 17:39:10.743991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.744856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745062] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.745999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 
[2024-10-01 17:39:10.746307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.746970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.747999] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 
[2024-10-01 17:39:10.748841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.748974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.499 [2024-10-01 17:39:10.749340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.749750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750957] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.750999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 
[2024-10-01 17:39:10.751759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.751969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.500 [2024-10-01 17:39:10.752674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.503 [2024-10-01 17:39:10.772780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.772810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773248] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.773996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 
[2024-10-01 17:39:10.774088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.774674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.504 [2024-10-01 17:39:10.775425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.775965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:39:12.504 [2024-10-01 17:39:10.776038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.776999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.777967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778167] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.504 [2024-10-01 17:39:10.778706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 
[2024-10-01 17:39:10.778909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.778975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.779967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780693] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.780968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 
[2024-10-01 17:39:10.781501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.781984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.782979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783465] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.783969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 [2024-10-01 17:39:10.784212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.505 
[2024-10-01 17:39:10.784245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:39:12.509 [... identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors, emitted repeatedly by the unit test from 2024-10-01 17:39:10.784276 through 17:39:10.804015, omitted ...]
[2024-10-01 17:39:10.804046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.804985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805877] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.805938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 
[2024-10-01 17:39:10.806937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.806967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.807985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.808020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.509 [2024-10-01 17:39:10.808049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808660] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.808975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 
[2024-10-01 17:39:10.809454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.809972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.510 [2024-10-01 17:39:10.810913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.810967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:39:12.510 [2024-10-01 17:39:10.811516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.811984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.812992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813286] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.510 [2024-10-01 17:39:10.813440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.813973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 
[2024-10-01 17:39:10.814104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.814900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.815983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.816018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.816049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.816080] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 [2024-10-01 17:39:10.816108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.511 
[... identical *ERROR* line (ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: Read NLB 1 * block size 512 > SGL length 1) repeated continuously for host timestamps 2024-10-01 17:39:10.816144 through 17:39:10.835784; only the timestamps differ, Jenkins time advances from 00:39:12.511 to 00:39:12.515 ...]
00:39:12.515 [2024-10-01 17:39:10.835818] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.835847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.835877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.835912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.835943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.835972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 
[2024-10-01 17:39:10.836959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.836990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.837999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838898] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.838960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.515 [2024-10-01 17:39:10.839344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 
[2024-10-01 17:39:10.839735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.839989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.840663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841606] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.841988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.842020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.842049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.516 [2024-10-01 17:39:10.842076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 
[2024-10-01 17:39:10.842430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.842957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.843987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844397] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.844971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 
[2024-10-01 17:39:10.845189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.517 [2024-10-01 17:39:10.845302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.845849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.845883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.845915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.845946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.845978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.846978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847317] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.847999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 
[2024-10-01 17:39:10.848273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.518 [2024-10-01 17:39:10.848367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.848967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.849001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.518 [2024-10-01 17:39:10.849035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.519 [2024-10-01 
17:39:10.849065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log condensed: this ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line, "Read NLB 1 * block size 512 > SGL length 1", repeats several hundred times in this stretch of the unit-test output (timestamps 2024-10-01 17:39:10.849 onward); the duplicate entries are elided]
[further duplicate *ERROR* entries elided; the log resumes]
00:39:12.524 [2024-10-01 17:39:10.867840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.867868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.867898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.867927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.867957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.867987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.868972] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.524 [2024-10-01 17:39:10.869471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 
[2024-10-01 17:39:10.869787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.869991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.870690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871535] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.871975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 
[2024-10-01 17:39:10.872639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.525 [2024-10-01 17:39:10.872768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.872978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.873971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874251] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.874967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 
[2024-10-01 17:39:10.875132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.875980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.526 [2024-10-01 17:39:10.876261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.876864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877354] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.877971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 
[2024-10-01 17:39:10.878151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.527 [2024-10-01 17:39:10.878931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
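(Editor's note: every entry in the burst above reports the same condition: the read command asks for NLB * block size = 1 * 512 = 512 bytes of data, but the SGL supplied with the command describes only a 1-byte buffer, so the target fails the command instead of submitting it to the bdev. The shell sketch below only illustrates that arithmetic; it is not the ctrlr_bdev.c implementation, and the check_read_fits helper name is hypothetical.)

  # Sketch only: the kind of bounds check reported at ctrlr_bdev.c:361.
  # nlb, block_size and sgl_length take the values seen in the log above.
  check_read_fits() {
      local nlb=$1 block_size=$2 sgl_length=$3
      if (( nlb * block_size > sgl_length )); then
          echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
          return 1    # request rejected; nothing is sent to the backing bdev
      fi
  }
  check_read_fits 1 512 1   # 512 > 1, so this read is rejected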
00:39:12.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:39:12.527 17:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:12.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[... further identical "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices at 00:39:12.527 and 00:39:12.867 omitted ...]
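(Editor's note: the rpc.py call above is the namespace hot-add step of the stress script (ns_hotplug_stress.sh line 46): it attaches the bdev named Delay0 as a namespace of the NVMe-oF subsystem nqn.2016-06.io.spdk:cnode1 on the running target while the initiator keeps issuing reads. Reproduced on its own, with the path, NQN and bdev name exactly as they appear in the log and no additional options, the command is:)

  # Hot-add the Delay0 bdev as a namespace of subsystem cnode1 on the running target.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0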
00:39:12.867 [2024-10-01 17:39:11.057236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:361 error repeats continuously, with only the timestamp changing, from 17:39:11.057 through 17:39:11.061 (build time 00:39:12.867 to 00:39:12.868); the duplicate entries are omitted here ...]
00:39:12.868
[2024-10-01 17:39:11.061286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.061986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.062744] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.868 [2024-10-01 17:39:11.063035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 
[2024-10-01 17:39:11.063794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.063969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.064972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065905] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.065993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 
[2024-10-01 17:39:11.066680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.066991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.067968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068295] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.869 [2024-10-01 17:39:11.068702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.068988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 
[2024-10-01 17:39:11.069094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.069990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.070024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.870 [2024-10-01 17:39:11.070055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:39:12.870 [2024-10-01 17:39:11.070082 - 17:39:11.070508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate log lines collapsed)
00:39:12.870 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:39:12.870 [2024-10-01 17:39:11.070542 - 17:39:11.070881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate log lines collapsed)
00:39:12.870 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:39:12.870 [2024-10-01 17:39:11.070909 - 17:39:11.071524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated; duplicate log lines collapsed)
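The trace lines above show ns_hotplug_stress.sh bumping null_size to 1012 and then resizing the NULL1 null bdev through rpc.py while the surrounding read errors are being logged; each rejected read asks for NLB 1 * block size 512 = 512 bytes against an SGL of only 1 byte, which is exactly the check nvmf_bdev_ctrlr_read_cmd reports. A minimal sketch of driving the same resize by hand with SPDK's rpc.py, run from an SPDK source tree against a target already listening on the default RPC socket; NULL1, the 1012 MB target size, and the 512-byte block size come from the log, while the 1000 MB create size is an illustrative assumption:

  # create a null bdev: name, total size in MB, block size in bytes (1000 MB is an assumed starting size)
  scripts/rpc.py bdev_null_create NULL1 1000 512
  # grow it to 1012 MB, the same call ns_hotplug_stress.sh@50 issues above
  scripts/rpc.py bdev_null_resize NULL1 1012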
00:39:12.870 [2024-10-01 17:39:11.071860 - 17:39:11.075970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated continuously over this interval; duplicate log lines collapsed)
00:39:12.871 [2024-10-01 17:39:11.076339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.076984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077100] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 
[2024-10-01 17:39:11.077914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.077969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.078998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.871 [2024-10-01 17:39:11.079565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079786] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.079983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 
[2024-10-01 17:39:11.080877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.080967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.081967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082448] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.082743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 Message 
suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.872 [2024-10-01 17:39:11.083567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.083968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084341] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.084990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:39:12.872 [2024-10-01 17:39:11.085483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.872 [2024-10-01 17:39:11.085604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.085984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.086982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087107] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.087988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 [2024-10-01 17:39:11.088200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 
[2024-10-01 17:39:11.088232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.873 
(the same *ERROR* line from ctrlr_bdev.c:361 repeats continuously, with only the timestamp advancing from 17:39:11.088 through 17:39:11.107; several hundred identical repetitions condensed here) 
[2024-10-01 17:39:11.107115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.876 [2024-10-01 17:39:11.107555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.107986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108962] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.108993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 
[2024-10-01 17:39:11.109803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.109857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.110991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111744] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.111976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 
[2024-10-01 17:39:11.112872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.112974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.877 [2024-10-01 17:39:11.113628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.113970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114494] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.114530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.115958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 
[2024-10-01 17:39:11.116035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.116966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117749] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.117969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 
[2024-10-01 17:39:11.118599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.118985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.878 [2024-10-01 17:39:11.119302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.879 [2024-10-01 17:39:11.119920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.119978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.879 [2024-10-01 17:39:11.120513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:39:12.879 [2024-10-01 17:39:11.120544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* entries repeated through 17:39:11.140010; duplicate log lines omitted]
00:39:12.882 [2024-10-01 17:39:11.140010] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 
[2024-10-01 17:39:11.140837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.140986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.141984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142802] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.142991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 [2024-10-01 17:39:11.143871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.882 
[2024-10-01 17:39:11.143898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.143948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.143980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.144968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145604] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.145770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 
[2024-10-01 17:39:11.146751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.146980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.147965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.148965] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 
[2024-10-01 17:39:11.149691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.149999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.883 [2024-10-01 17:39:11.150030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.150991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151462] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.151999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 [2024-10-01 17:39:11.152296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 
[2024-10-01 17:39:11.152325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.884 
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred times, timestamps 2024-10-01 17:39:11.152355 through 17:39:11.157308, elapsed 00:39:12.884 ...]
00:39:12.885 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same *ERROR* line continues to repeat, timestamps 2024-10-01 17:39:11.157664 through 17:39:11.172452, elapsed 00:39:12.885-00:39:12.887 ...]
00:39:12.887 [2024-10-01 17:39:11.172493] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.172978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 
[2024-10-01 17:39:11.173315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.887 [2024-10-01 17:39:11.173379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.173989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.174973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175262] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.175974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 
[2024-10-01 17:39:11.176067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.176368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.177989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178296] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.178984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 
[2024-10-01 17:39:11.179230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.888 [2024-10-01 17:39:11.179567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.179973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180854] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.180980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.181978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 
[2024-10-01 17:39:11.182349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.182973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.183968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184096] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 [2024-10-01 17:39:11.184831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 
[2024-10-01 17:39:11.184864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.889 
[... the identical nvmf_bdev_ctrlr_read_cmd error repeats continuously from 17:39:11.184864 through roughly 17:39:11.204, differing only in the microsecond timestamp; several hundred occurrences omitted here ...] 
00:39:12.891 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.891 
size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.204988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205645] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.205975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 
[2024-10-01 17:39:11.206462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.206987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.207990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.208032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.893 [2024-10-01 17:39:11.208061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208636] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.208966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 
[2024-10-01 17:39:11.209490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.209999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.210970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211254] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.894 [2024-10-01 17:39:11.211700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.211982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 
[2024-10-01 17:39:11.212078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.212869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.213984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214204] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.214998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.215029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.895 [2024-10-01 17:39:11.215059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 
[2024-10-01 17:39:11.215189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.215973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.896 [2024-10-01 17:39:11.216841] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:39:12.896 [2024-10-01 17:39:11.216870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated with identical text for every entry from 2024-10-01 17:39:11.216904 through 17:39:11.230889]
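The repeated *ERROR* entry reports a length check inside nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:361): the read is rejected because the requested transfer, NLB 1 multiplied by the 512-byte block size, exceeds the 1-byte buffer described by the request's SGL. The sketch below is only an illustration of that style of check under those assumptions, not the SPDK source; read_cmd_length_ok, xfer_len, and the sample values are invented for the example.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the check reported in the log: reject a read
 * whose transfer length (number of logical blocks * block size) exceeds the
 * length of the buffer described by the command's SGL. */
static bool read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
    uint64_t xfer_len = nlb * (uint64_t)block_size; /* bytes the read asks for */

    if (xfer_len > sgl_length) {
        /* Same wording as the log entries above. */
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                nlb, block_size, sgl_length);
        return false; /* the request would complete with an error status */
    }
    return true;
}

int main(void)
{
    read_cmd_length_ok(1, 512, 1);    /* the failing case driven here: prints the error */
    read_cmd_length_ok(1, 512, 4096); /* a well-formed request: buffer is large enough */
    return 0;
}

Run as-is, the first call prints the same message that floods this part of the log; the suppressed "Read completed with error (sct=0, sc=15)" completions below are consistent with such reads being rejected, sc=15 (0x0F) being Data SGL Length Invalid in the generic status code type.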
[ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated with identical text for every entry from 2024-10-01 17:39:11.230922 through 17:39:11.231844]
00:39:12.901 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated for every entry from 17:39:11.231873 through 17:39:11.231999]
00:39:12.901 true
[ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated for every entry from 17:39:11.232030 through 17:39:11.236793]
00:39:12.903 [2024-10-01 17:39:11.236852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.236881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.236912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.236941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.236972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237708] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.237992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 [2024-10-01 17:39:11.238461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.903 
[2024-10-01 17:39:11.238492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.238525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.238555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.238912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.238942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.238979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.239968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240375] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.240819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 
[2024-10-01 17:39:11.241524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.904 [2024-10-01 17:39:11.241592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.241973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.242991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243123] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:12.905 [2024-10-01 17:39:11.243636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.243988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.244022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.905 [2024-10-01 17:39:11.244053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.244081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.244113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.244140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.905 [2024-10-01 17:39:11.244168] 
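For context, the repeated *ERROR* line above is the request-length validation in SPDK's nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c): before a read is handed to the bdev layer, the target checks that the requested number of logical blocks times the namespace block size fits inside the data buffer described by the host's SGL, and fails the command otherwise. The C sketch below illustrates only that check with simplified stand-in types; the struct, field, and function names here are placeholders for illustration, not SPDK's actual API.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the request fields involved in the check;
 * the real SPDK request/bdev structures carry the same information
 * but with different layouts and names. */
struct read_cmd {
    uint64_t num_blocks;  /* number of logical blocks requested (printed as "NLB" in the log) */
    uint32_t block_size;  /* namespace logical block size in bytes */
    uint32_t sgl_length;  /* total transfer length described by the host SGL */
};

/* Reject the read if the requested transfer does not fit in the SGL,
 * which is the condition behind the repeated *ERROR* line above. */
static int validate_read_cmd(const struct read_cmd *cmd)
{
    if (cmd->num_blocks * cmd->block_size > cmd->sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                cmd->num_blocks, cmd->block_size, cmd->sgl_length);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* The case from the log: a 1-block (512 B) read with a 1-byte SGL. */
    struct read_cmd cmd = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
    return validate_read_cmd(&cmd) == 0 ? 0 : 1;
}

With the values printed in each error line (NLB 1, block size 512, SGL length 1) the check fails, so the target emits the message once per rejected read while the ns_hotplug_stress script keeps issuing I/O and namespace add/remove operations.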
[... identical ctrlr_bdev.c:361 read errors continue ...]
00:39:12.908 [2024-10-01 17:39:11.252348] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.252989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 
[2024-10-01 17:39:11.253713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.253973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.908 [2024-10-01 17:39:11.254931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255523] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.255973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 
[2024-10-01 17:39:11.256312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.256982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.257977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258248] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.909 [2024-10-01 17:39:11.258525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.258980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 
[2024-10-01 17:39:11.259079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.259991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.260983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261051] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.910 [2024-10-01 17:39:11.261658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 
[2024-10-01 17:39:11.261873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.261972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.262998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263877] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.263992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 [2024-10-01 17:39:11.264892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.911 
00:39:12.911 [2024-10-01 17:39:11.264920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same nvmf_bdev_ctrlr_read_cmd error is logged repeatedly, timestamps 2024-10-01 17:39:11.264958 through 17:39:11.267105 ...]
00:39:12.912 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same nvmf_bdev_ctrlr_read_cmd error continues to repeat, timestamps 2024-10-01 17:39:11.267448 through 17:39:11.284818 ...]
size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284861] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.284975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.285975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 
[2024-10-01 17:39:11.286192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.286996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.917 [2024-10-01 17:39:11.287529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287832] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.287961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 
[2024-10-01 17:39:11.288742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.288984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.289973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290666] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.290971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.918 [2024-10-01 17:39:11.291006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 
[2024-10-01 17:39:11.291440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.291998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.292991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293307] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.919 [2024-10-01 17:39:11.293965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.293999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 
[2024-10-01 17:39:11.294100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.294578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.295986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296229] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 [2024-10-01 17:39:11.296990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.920 
00:39:12.920 [2024-10-01 17:39:11.297043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:39:12.921 (the *ERROR* line above repeats continuously from 17:39:11.297043 through 17:39:11.316763)
00:39:12.922 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.316999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317453] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.317682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.318977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 
[2024-10-01 17:39:11.319011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.319040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.319073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.319102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.926 [2024-10-01 17:39:11.319131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.319988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320697] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.320980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 
[2024-10-01 17:39:11.321526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.321972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.927 [2024-10-01 17:39:11.322350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.322677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323658] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.323969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 
[2024-10-01 17:39:11.324443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.324963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.928 [2024-10-01 17:39:11.325815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.325863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.325892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.325921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.325960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.325992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326232] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.326997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 
[2024-10-01 17:39:11.327053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.327987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.328963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929 [2024-10-01 17:39:11.329195] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.929
[duplicate log output collapsed: the identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry is repeated many times with only the timestamp changing, spanning 2024-10-01 17:39:11.329224 through 17:39:11.349314 (log marks 00:39:12.929 through 00:39:12.935)]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:39:12.933
[2024-10-01 17:39:11.349344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.349970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.350908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351400] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.351968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 
[2024-10-01 17:39:11.352275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.935 [2024-10-01 17:39:11.352306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.352968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.353973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354224] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.354999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 
[2024-10-01 17:39:11.355038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.355696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.936 [2024-10-01 17:39:11.356889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.356921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.356952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.356980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357204] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.357971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 
[2024-10-01 17:39:11.358016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.358982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359771] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.359983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.937 [2024-10-01 17:39:11.360350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 
[2024-10-01 17:39:11.360815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.360978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.938 [2024-10-01 17:39:11.361577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:39:12.938 [2024-10-01 17:39:11.361613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 17:39:11.361613 through 17:39:11.379539, differing only in its timestamp; repeated entries elided ...]
00:39:12.942 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
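The flood of identical errors above comes from a negative unit test: nvmf_bdev_ctrlr_read_cmd rejects a read whose payload (NLB times the block size) is larger than the SGL the host supplied, so one 512-byte block offered only a 1-byte SGL is refused. Below is a minimal standalone C sketch of that kind of length check; the function name, signature, and message formatting are illustrative assumptions, not the actual SPDK implementation.

/* Illustrative sketch only: assumed names, not the SPDK code from ctrlr_bdev.c. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Returns true when the bytes requested by the read command (NLB * block size)
 * fit inside the SGL the host provided for the data transfer. */
static bool
read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
    uint64_t read_bytes = nlb * block_size;   /* bytes the read would transfer */

    if (read_bytes > sgl_length) {
        /* Same shape as the message seen in the log above. */
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
                nlb, block_size, sgl_length);
        return false;
    }
    return true;
}

int
main(void)
{
    /* The case the unit test drives repeatedly: 1 block of 512 bytes against
     * a 1-byte SGL, which must be rejected. */
    bool ok = read_cmd_length_ok(1, 512, 1);
    printf("request %s\n", ok ? "accepted" : "rejected");
    return 0;
}

Compiled and run on its own, the sketch prints the rejection message once; the unit test in the log exercises the same path thousands of times, which is why the target eventually suppresses the completion messages. The test log resumes below.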
length 1 00:39:12.942 [2024-10-01 17:39:11.379700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.379976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:12.942 [2024-10-01 17:39:11.380371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.380967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381275] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.381963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 
[2024-10-01 17:39:11.382426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.382974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 [2024-10-01 17:39:11.383663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:39:13.203 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.203 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:13.203 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:13.203 true 00:39:13.464 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:13.464 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.464 17:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.724 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:13.724 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:39:13.984 true 00:39:13.984 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:13.984 17:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:14.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:14.922 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:15.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:15.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:15.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:15.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:15.182 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:15.182 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:15.441 true 00:39:15.441 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:15.441 17:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:16.379 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:16.379 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:16.379 17:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:16.639 true 00:39:16.639 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:16.639 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.900 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.900 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:16.900 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:17.160 true 00:39:17.160 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:17.160 17:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:18.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:18.542 17:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:18.542 true 00:39:18.801 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:18.801 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.743 17:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.743 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:19.743 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:19.743 true 00:39:20.002 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:20.002 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.002 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.261 17:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:20.261 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:20.521 true 00:39:20.521 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:20.521 17:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.504 17:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:21.764 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:21.764 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:21.764 true 00:39:22.023 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:22.023 17:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.969 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.969 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:22.969 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:22.969 true 00:39:23.229 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:23.229 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.229 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.489 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:23.489 17:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:23.750 true 00:39:23.750 17:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:23.750 17:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.688 17:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:24.947 17:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:24.948 17:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:25.207 true 00:39:25.207 17:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:25.207 17:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.148 17:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.148 17:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:26.148 17:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:26.409 true 00:39:26.409 17:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:26.409 17:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.669 17:39:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.669 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:26.669 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:26.928 true 00:39:26.928 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:26.928 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.928 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.188 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:27.188 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:27.447 true 00:39:27.447 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:27.447 17:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.707 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.707 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:27.707 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:27.967 true 00:39:27.967 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:27.967 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.226 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.226 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:28.226 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:28.486 true 00:39:28.486 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:28.486 17:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.746 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.746 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:29.005 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:29.005 true 00:39:29.005 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:29.005 17:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.387 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:30.387 17:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:30.646 true 00:39:30.646 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:30.646 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:31.585 17:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:31.585 17:39:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:31.585 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:31.845 true 00:39:31.845 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:31.845 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.106 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.106 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:32.106 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:32.366 true 00:39:32.366 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:32.366 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.627 17:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.627 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:32.627 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:32.887 true 00:39:32.887 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:32.887 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.147 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.408 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:33.408 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:33.408 true 00:39:33.408 17:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225 00:39:33.408 17:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:33.408 Initializing NVMe Controllers
00:39:33.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:33.408 Controller IO queue size 128, less than required.
00:39:33.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:33.408 Controller IO queue size 128, less than required.
00:39:33.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:33.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:39:33.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:33.408 Initialization complete. Launching workers.
00:39:33.408 ========================================================
00:39:33.408 Latency(us)
00:39:33.408 Device Information : IOPS MiB/s Average min max
00:39:33.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2798.81 1.37 26176.80 1521.96 1149221.33
00:39:33.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16897.49 8.25 7550.50 1634.54 405854.64
00:39:33.408 ========================================================
00:39:33.408 Total : 19696.30 9.62 10197.26 1521.96 1149221.33
00:39:33.408
00:39:33.670 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:33.670 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:39:33.670 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:39:33.930 true
00:39:33.930 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3309225
00:39:33.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3309225) - No such process
00:39:33.930 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3309225
00:39:33.930 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:34.191 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:34.191 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:34.191 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:39:34.191 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:39:34.191 17:39:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:34.191 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:34.451 null0 00:39:34.451 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:34.451 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:34.451 17:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:34.712 null1 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:34.712 null2 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:34.712 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:34.972 null3 00:39:34.972 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:34.972 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:34.972 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:34.972 null4 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:35.232 null5 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:35.232 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:35.494 null6 00:39:35.495 17:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:35.495 null7 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
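A minimal shell sketch of what the bdev_null_create calls traced above amount to, reconstructed only from the rpc.py invocations visible in this log (the rpc.py path and the 100/4096 arguments are copied from the trace; the loop itself is illustrative, not the ns_hotplug_stress.sh source):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create the eight null bdevs null0..null7 that the add/remove workers below
  # attach as namespaces 1..8 (100 and 4096 are the size/block-size arguments
  # used in the traced calls).
  for i in {0..7}; do
      "$rpc" bdev_null_create "null$i" 100 4096
  done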
00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
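The add_remove workers being launched here follow a simple hot-add/hot-remove cycle. A hedged sketch reconstructed from the traced commands and counters (local nsid/bdev, the (( i < 10 )) bound, nvmf_subsystem_add_ns -n <nsid> and nvmf_subsystem_remove_ns), not a copy of the script's own function:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # One worker per namespace: add the given null bdev as namespace <nsid>,
  # then remove it again, ten times (the i < 10 bound comes from the trace).
  add_remove() {
      local nsid=$1 bdev=$2 i
      for (( i = 0; i < 10; i++ )); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }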
00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
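The pids+=($!) records here and the wait on eight PIDs further down indicate the workers run in the background. A sketch of that fan-out, assuming add_remove is the worker sketched above and that namespace N is backed by bdev null(N-1), as the -n arguments in the trace show:

  # Launch one background add_remove worker per namespace (nthreads=8 and
  # pids=() appear verbatim in the trace), then wait for all of them.
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      add_remove "$(( i + 1 ))" "null$i" &   # nsid 1..8 over bdevs null0..null7
      pids+=($!)
  done
  wait "${pids[@]}"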
00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:35.495 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3315797 3315799 3315802 3315805 3315807 3315810 3315813 3315815 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.496 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:35.757 
17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:35.757 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
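The launcher side of the trace (ns_hotplug_stress.sh@62-@66) backgrounds one such worker per namespace and then waits on all of them, which is why the add/remove calls for NSIDs 1 through 8 interleave freely in the log; the eight PIDs handed to wait above are those background jobs. A hedged reconstruction of that loop (nthreads=8 is inferred from the PID count rather than shown directly in the trace):

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # e.g. add_remove 3 null2, as seen in the trace
        pids+=($!)
    done
    wait "${pids[@]}"

With eight workers racing attach and detach against the same subsystem, the target's namespace map churns continuously, which is the hot-plug stress this test is designed to apply.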
00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:36.018 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.279 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
8 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.280 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:36.542 17:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:36.542 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.803 17:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:36.803 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:37.064 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.324 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.325 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.585 17:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.585 17:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:37.585 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:37.585 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.846 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:37.847 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:37.847 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:37.847 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:37.847 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:37.847 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.108 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.370 17:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.370 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.631 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.632 17:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.632 
17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.632 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.893 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.155 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:39.415 17:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:39.415 rmmod nvme_tcp 00:39:39.675 rmmod nvme_fabrics 00:39:39.675 rmmod nvme_keyring 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3308502 ']' 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3308502 ']' 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3308502' 00:39:39.675 killing process with pid 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3308502 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:39.675 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:39:39.935 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.935 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.935 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.935 17:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.935 17:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.846 00:39:41.846 real 0m48.953s 00:39:41.846 user 2m58.973s 00:39:41.846 sys 0m21.020s 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.846 ************************************ 00:39:41.846 END TEST nvmf_ns_hotplug_stress 00:39:41.846 ************************************ 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:41.846 ************************************ 00:39:41.846 START TEST nvmf_delete_subsystem 00:39:41.846 ************************************ 00:39:41.846 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:42.108 * Looking for test storage... 
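The ns_hotplug_stress run summarized above drives its load entirely through scripts/rpc.py: inside a ten-iteration loop it keeps attaching namespaces backed by null bdevs to nqn.2016-06.io.spdk:cnode1 while removing others. A minimal standalone sketch of that add/remove pattern, using the rpc.py path and NQN from the trace (the namespace IDs and null bdev names below are illustrative, and failures are tolerated just as the stress script tolerates races):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
  # attach a namespace backed by a null bdev, then detach a (possibly different) one;
  # either call may fail if that namespace is already present/absent, which is the point
  "$rpc" nvmf_subsystem_add_ns -n $(( (i % 8) + 1 )) "$nqn" "null$(( i % 8 ))" || true
  "$rpc" nvmf_subsystem_remove_ns "$nqn" $(( ((i + 3) % 8) + 1 )) || true
  (( ++i ))
done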
00:39:42.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:42.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.108 --rc genhtml_branch_coverage=1 00:39:42.108 --rc genhtml_function_coverage=1 00:39:42.108 --rc genhtml_legend=1 00:39:42.108 --rc geninfo_all_blocks=1 00:39:42.108 --rc geninfo_unexecuted_blocks=1 00:39:42.108 00:39:42.108 ' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:42.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.108 --rc genhtml_branch_coverage=1 00:39:42.108 --rc genhtml_function_coverage=1 00:39:42.108 --rc genhtml_legend=1 00:39:42.108 --rc geninfo_all_blocks=1 00:39:42.108 --rc geninfo_unexecuted_blocks=1 00:39:42.108 00:39:42.108 ' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:42.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.108 --rc genhtml_branch_coverage=1 00:39:42.108 --rc genhtml_function_coverage=1 00:39:42.108 --rc genhtml_legend=1 00:39:42.108 --rc geninfo_all_blocks=1 00:39:42.108 --rc geninfo_unexecuted_blocks=1 00:39:42.108 00:39:42.108 ' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:42.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:42.108 --rc genhtml_branch_coverage=1 00:39:42.108 --rc genhtml_function_coverage=1 00:39:42.108 --rc 
genhtml_legend=1 00:39:42.108 --rc geninfo_all_blocks=1 00:39:42.108 --rc geninfo_unexecuted_blocks=1 00:39:42.108 00:39:42.108 ' 00:39:42.108 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:42.109 17:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:42.109 17:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:50.251 17:39:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:50.251 17:39:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:50.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:50.251 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:50.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.252 17:39:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:50.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:50.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:50.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:50.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:39:50.252 00:39:50.252 --- 10.0.0.2 ping statistics --- 00:39:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.252 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:50.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:50.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:39:50.252 00:39:50.252 --- 10.0.0.1 ping statistics --- 00:39:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:50.252 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3320713 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3320713 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3320713 ']' 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
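For the delete_subsystem test the harness rebuilds the usual phy-mode topology seen above: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP/4420 and a ping in each direction sanity-checks the data path before the target is started on two cores in interrupt mode. Collected into one place from the ip, iptables and nvmf_tgt invocations in the trace (the harness's ipts helper additionally tags the iptables rule with an SPDK_NVMF comment):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x3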
00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:50.252 17:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.252 [2024-10-01 17:39:47.920238] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:50.252 [2024-10-01 17:39:47.921485] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:39:50.252 [2024-10-01 17:39:47.921546] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.252 [2024-10-01 17:39:47.999992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:50.252 [2024-10-01 17:39:48.039351] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.252 [2024-10-01 17:39:48.039399] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.253 [2024-10-01 17:39:48.039408] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.253 [2024-10-01 17:39:48.039416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.253 [2024-10-01 17:39:48.039422] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:50.253 [2024-10-01 17:39:48.039577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.253 [2024-10-01 17:39:48.039579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.253 [2024-10-01 17:39:48.090139] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:50.253 [2024-10-01 17:39:48.090854] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:50.253 [2024-10-01 17:39:48.091137] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
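The startup notices above confirm what those flags request: -m 0x3 brings up two reactors (cores 0 and 1), and the app thread plus both nvmf_tgt_poll_group threads are switched to interrupt mode. To double-check that state on a live target rather than grepping the log, querying the reactors over the RPC socket should work; framework_get_reactors is a standard SPDK RPC, though the exact fields it reports (for example an in_interrupt flag) vary between SPDK versions, so treat this as a sketch:

# the RPC Unix socket (/var/tmp/spdk.sock) is a filesystem object, so this is reachable
# from the root namespace even though the target runs inside cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors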
00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.253 [2024-10-01 17:39:48.756549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.253 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.514 [2024-10-01 17:39:48.804598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.514 NULL1 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.514 17:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.514 Delay0 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3321061 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:50.514 17:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:50.514 [2024-10-01 17:39:48.893503] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
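At this point the target has been provisioned through rpc_cmd (the autotest wrapper that forwards to scripts/rpc.py): a TCP transport with the script's default -o -u 8192 options, subsystem cnode1 capped at 10 namespaces with a listener on 10.0.0.2:4420, and a Delay0 bdev layered over a null bdev with large artificial latencies so that plenty of I/O is still queued when the subsystem is torn down; spdk_nvme_perf is then pointed at it for five seconds and the deletion is issued about two seconds in. The same sequence as direct rpc.py calls, with the arguments copied from the trace (backgrounding perf and the sleep mirror the script's perf_pid and "sleep 2" steps; the trailing wait is only for the sketch):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512            # null bdev: total size 1000 (MB), 512-byte blocks
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # read/write avg and p99 latencies, in microseconds
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# queue I/O against the delayed namespace, then delete the subsystem underneath it
"$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait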
00:39:52.427 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:52.427 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.427 17:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Write completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 starting I/O failed: -6 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 Read completed with error (sct=0, sc=8) 00:39:52.689 [2024-10-01 17:39:51.054509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5a70 is same with the state(6) to be set 00:39:52.689 Write 
completed with error (sct=0, sc=8)
00:39:52.689 ... (a long run of "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" completions, interleaved with repeated "starting I/O failed: -6" messages, continues from 00:39:52.689 through 00:39:53.661 and is condensed here; the distinct nvme_tcp errors logged in that window are kept below) ...
00:39:52.690 [2024-10-01 17:39:51.058480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa41000d450 is same with the state(6) to be set
00:39:53.660 [2024-10-01 17:39:52.033108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b20 is same with the state(6) to be set
00:39:53.660 [2024-10-01 17:39:52.057881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5c50 is same with the state(6) to be set
00:39:53.661 [2024-10-01 17:39:52.058474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70b0 is same with the state(6) to be set
00:39:53.661 [2024-10-01 17:39:52.060726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa41000cfe0 is same with the state(6) to be set
00:39:53.661 [2024-10-01 17:39:52.060818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa41000d780 is same with the state(6) to be set
00:39:53.661 Initializing NVMe Controllers
00:39:53.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:53.661 Controller IO queue size 128, less than required.
00:39:53.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:53.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:53.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:53.661 Initialization complete. Launching workers.
00:39:53.661 ========================================================
00:39:53.661                                                                                    Latency(us)
00:39:53.661 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:39:53.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  173.75    0.08  885888.36     263.07 1007290.36
00:39:53.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  165.78    0.08  905406.65     307.03 1010508.59
00:39:53.661 ========================================================
00:39:53.661 Total                                                                    :  339.53    0.17  895418.55     263.07 1010508.59
00:39:53.661
00:39:53.661 [2024-10-01 17:39:52.061537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4b20 (9): Bad file descriptor
00:39:53.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:39:53.661 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:53.661 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:53.661 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3321061
00:39:53.661 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3321061
00:39:54.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3321061) - No such process
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3321061
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3321061
00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:39:54.326 17:39:52
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3321061 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:54.326 [2024-10-01 17:39:52.596824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3321732 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:54.326 17:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:54.326 [2024-10-01 17:39:52.661208] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:54.586 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:54.586 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:54.586 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:55.157 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:55.157 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:55.157 17:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:55.728 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:55.728 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:55.728 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:56.298 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:56.298 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:56.298 17:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:56.868 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:56.868 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:56.868 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:57.128 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:57.128 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:57.128 17:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:57.389 Initializing NVMe Controllers 00:39:57.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:57.389 Controller IO queue size 128, less than required. 00:39:57.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
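
The xtrace entries above show the pattern delete_subsystem.sh uses while spdk_nvme_perf runs in the background: poll the PID with kill -0, sleep 0.5 s between polls, and stop once a delay counter passes its limit. In this log the same loop appears twice, once around each spdk_nvme_perf invocation (PIDs 3321061 and 3321732). Below is a minimal standalone sketch of that polling loop; the background command (a plain sleep), MAX_POLLS and the messages are illustrative placeholders, not the SPDK test code itself.

# Sketch of the bounded poll-until-exit loop seen in the trace above.
# Assumptions: the background command and MAX_POLLS are placeholders.
set -u

sleep 3 &                     # stand-in for the background perf workload
pid=$!

delay=0
MAX_POLLS=20                  # 20 polls x 0.5 s gives a 10 s upper bound
while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > MAX_POLLS )); then
        echo "PID $pid still running after $(( MAX_POLLS / 2 )) s, giving up" >&2
        kill "$pid" 2>/dev/null
        break
    fi
    sleep 0.5
done

# Once kill -0 starts failing the child has exited; wait just collects its status.
wait "$pid"
echo "background workload exited with status $?"

When kill -0 stops succeeding the child is gone, so the trailing wait returns immediately with the child's exit status; in the first run above the test wraps that wait in NOT because spdk_nvme_perf is expected to fail once its subsystem has been deleted.
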
00:39:57.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:57.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:57.389 Initialization complete. Launching workers. 00:39:57.389 ======================================================== 00:39:57.389 Latency(us) 00:39:57.389 Device Information : IOPS MiB/s Average min max 00:39:57.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002403.38 1000207.10 1041002.04 00:39:57.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004233.93 1000263.58 1042468.17 00:39:57.389 ======================================================== 00:39:57.389 Total : 256.00 0.12 1003318.65 1000207.10 1042468.17 00:39:57.389 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321732 00:39:57.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3321732) - No such process 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3321732 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:57.650 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:57.650 rmmod nvme_tcp 00:39:57.650 rmmod nvme_fabrics 00:39:57.650 rmmod nvme_keyring 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3320713 ']' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3320713 ']' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@955 -- # uname 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320713' 00:39:57.909 killing process with pid 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3320713 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:57.909 17:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:00.449 00:40:00.449 real 0m18.112s 00:40:00.449 user 0m26.377s 00:40:00.449 sys 0m7.373s 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:00.449 ************************************ 00:40:00.449 END TEST nvmf_delete_subsystem 00:40:00.449 ************************************ 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:00.449 ************************************ 00:40:00.449 START TEST nvmf_host_management 00:40:00.449 ************************************ 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:00.449 * Looking for test storage... 00:40:00.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 
00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.449 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.450 17:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:00.450 17:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:08.590 17:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:08.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:08.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:08.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.590 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:08.591 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:08.591 17:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:08.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:40:08.591 00:40:08.591 --- 10.0.0.2 ping statistics --- 00:40:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.591 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:08.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:40:08.591 00:40:08.591 --- 10.0.0.1 ping statistics --- 00:40:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.591 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3326429 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3326429 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3326429 ']' 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:08.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:08.591 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.591 [2024-10-01 17:40:06.160738] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:08.591 [2024-10-01 17:40:06.161865] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:40:08.591 [2024-10-01 17:40:06.161919] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.591 [2024-10-01 17:40:06.250149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:08.591 [2024-10-01 17:40:06.299643] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.591 [2024-10-01 17:40:06.299698] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.591 [2024-10-01 17:40:06.299711] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:08.591 [2024-10-01 17:40:06.299718] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:08.591 [2024-10-01 17:40:06.299724] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:08.591 [2024-10-01 17:40:06.299865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:08.591 [2024-10-01 17:40:06.300046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:08.591 [2024-10-01 17:40:06.300211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.591 [2024-10-01 17:40:06.300211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:08.591 [2024-10-01 17:40:06.376780] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:08.591 [2024-10-01 17:40:06.377339] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:08.591 [2024-10-01 17:40:06.378339] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:08.591 [2024-10-01 17:40:06.378361] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:08.592 [2024-10-01 17:40:06.378543] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
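The trace above (nvmf/common.sh@250-291 and the nvmf_tgt launch under `ip netns exec`) shows how the test builds a single-host NVMe/TCP loopback: one interface is moved into a private network namespace for the target while the other stays in the default namespace for the initiator. The sketch below is only a consolidated, hedged restatement of the commands visible in the trace — interface names, addresses, the namespace name, and port 4420 are taken from the log; the wrapper helpers (e.g. `ipts`) are replaced with the raw commands they expand to, and the iptables comment string is illustrative.

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence traced above, consolidated for readability.
set -euo pipefail

TARGET_IF=cvl_0_0            # goes into the target's network namespace
INITIATOR_IF=cvl_0_1         # stays in the default namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

# Start from clean interfaces and a fresh namespace.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends and bring the links (plus loopback inside the namespace) up.
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to port 4420 on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow nvmf tcp'

# Sanity-check connectivity in both directions, as the trace does with ping -c 1.
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With this in place, nvmf_tgt is started through `ip netns exec cvl_0_0_ns_spdk ...` (the NVMF_TARGET_NS_CMD prefix seen in the trace), so the target and the initiator-side tools see distinct network stacks even though everything runs on one machine.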
00:40:08.592 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:08.592 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:08.592 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:08.592 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:08.592 17:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.592 [2024-10-01 17:40:07.025089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.592 Malloc0 00:40:08.592 [2024-10-01 17:40:07.109290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:08.592 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.853 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3326780 00:40:08.854 17:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3326780 /var/tmp/bdevperf.sock 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3326780 ']' 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:08.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:08.854 { 00:40:08.854 "params": { 00:40:08.854 "name": "Nvme$subsystem", 00:40:08.854 "trtype": "$TEST_TRANSPORT", 00:40:08.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.854 "adrfam": "ipv4", 00:40:08.854 "trsvcid": "$NVMF_PORT", 00:40:08.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.854 "hdgst": ${hdgst:-false}, 00:40:08.854 "ddgst": ${ddgst:-false} 00:40:08.854 }, 00:40:08.854 "method": "bdev_nvme_attach_controller" 00:40:08.854 } 00:40:08.854 EOF 00:40:08.854 )") 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:40:08.854 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:08.854 "params": { 00:40:08.854 "name": "Nvme0", 00:40:08.854 "trtype": "tcp", 00:40:08.854 "traddr": "10.0.0.2", 00:40:08.854 "adrfam": "ipv4", 00:40:08.854 "trsvcid": "4420", 00:40:08.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.854 "hdgst": false, 00:40:08.854 "ddgst": false 00:40:08.854 }, 00:40:08.854 "method": "bdev_nvme_attach_controller" 00:40:08.854 }' 00:40:08.854 [2024-10-01 17:40:07.212878] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:40:08.854 [2024-10-01 17:40:07.212933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326780 ] 00:40:08.854 [2024-10-01 17:40:07.273674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.854 [2024-10-01 17:40:07.304848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.115 Running I/O for 10 seconds... 00:40:09.115 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:09.115 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:09.115 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:09.115 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.115 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:09.375 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:09.636 17:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=615 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 615 -ge 100 ']' 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.636 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:09.636 [2024-10-01 17:40:08.028679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028736] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.028832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20060 is same with the state(6) to be set 00:40:09.636 [2024-10-01 17:40:08.031196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.636 [2024-10-01 17:40:08.031238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.636 [2024-10-01 17:40:08.031249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.636 [2024-10-01 17:40:08.031262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.637 [2024-10-01 17:40:08.031279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.637 [2024-10-01 17:40:08.031294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342ed0 is same with the state(6) to be set 00:40:09.637 [2024-10-01 17:40:08.031808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.031981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.637 [2024-10-01 17:40:08.032457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.637 [2024-10-01 17:40:08.032464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:40:09.638 [2024-10-01 17:40:08.032510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 
[2024-10-01 17:40:08.032682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 
17:40:08.032852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.638 [2024-10-01 17:40:08.032929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.032987] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x155bf20 was disconnected and freed. reset controller. 
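The long run of `ABORTED - SQ DELETION` completions and the "qpair ... was disconnected and freed" notice above are the expected effect of the host-management step: once bdevperf has accumulated enough reads, the test revokes the host's access to the subsystem, which forces the target to tear down the queue pairs mid-I/O. The sketch below is a hedged outline of that flow; `rpc_cmd` in the trace is the autotest wrapper around `scripts/rpc.py`, so the calls are written out directly here, with the socket paths, bdev name, NQNs, and the jq filter taken from the log and the loop bounds mirroring target/host_management.sh@54-62.

```bash
#!/usr/bin/env bash
# Sketch of the host-management check driven in the trace above (assumes an
# SPDK source checkout providing scripts/rpc.py).
set -euo pipefail

RPC=scripts/rpc.py                      # target RPC defaults to /var/tmp/spdk.sock
BPERF_SOCK=/var/tmp/bdevperf.sock       # bdevperf RPC socket (-r option in the trace)
SUBNQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host0

# Wait until bdevperf has issued a meaningful amount of I/O (>= 100 reads),
# polling bdev_get_iostat the same way the trace does.
for _ in {1..10}; do
    reads=$($RPC -s "$BPERF_SOCK" bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done

# Revoke the host's access while I/O is in flight; the target deletes the
# queue pairs, which surfaces as the ABORTED/SQ DELETION completions above.
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

# Re-admit the host so the initiator may reconnect.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
```

The "does not allow host" and "controller reinitialization failed" errors that follow in the log are the initiator-side view of the same event: bdevperf keeps trying to reconnect while the host is still removed from the allow list.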
00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.638 [2024-10-01 17:40:08.034194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:09.638 task offset: 93696 on job bdev=Nvme0n1 fails 00:40:09.638 00:40:09.638 Latency(us) 00:40:09.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.638 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:09.638 Job: Nvme0n1 ended in about 0.43 seconds with error 00:40:09.638 Verification LBA range: start 0x0 length 0x400 00:40:09.638 Nvme0n1 : 0.43 1653.31 103.33 149.88 0.00 34421.49 1529.17 38229.33 00:40:09.638 =================================================================================================================== 00:40:09.638 Total : 1653.31 103.33 149.88 0.00 34421.49 1529.17 38229.33 00:40:09.638 [2024-10-01 17:40:08.036224] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:09.638 [2024-10-01 17:40:08.036245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1342ed0 (9): Bad file descriptor 00:40:09.638 [2024-10-01 17:40:08.037457] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:40:09.638 [2024-10-01 17:40:08.037535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:09.638 [2024-10-01 17:40:08.037556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.638 [2024-10-01 17:40:08.037570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:40:09.638 [2024-10-01 17:40:08.037579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:40:09.638 [2024-10-01 17:40:08.037591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.638 [2024-10-01 17:40:08.037598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1342ed0 00:40:09.638 [2024-10-01 17:40:08.037617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1342ed0 (9): Bad file descriptor 00:40:09.638 [2024-10-01 17:40:08.037629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:09.638 [2024-10-01 17:40:08.037637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:09.638 [2024-10-01 17:40:08.037645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:40:09.638 [2024-10-01 17:40:08.037658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:09.638 17:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3326780 00:40:10.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3326780) - No such process 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:10.579 { 00:40:10.579 "params": { 00:40:10.579 "name": "Nvme$subsystem", 00:40:10.579 "trtype": "$TEST_TRANSPORT", 00:40:10.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.579 "adrfam": "ipv4", 00:40:10.579 "trsvcid": "$NVMF_PORT", 00:40:10.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.579 "hdgst": ${hdgst:-false}, 00:40:10.579 "ddgst": ${ddgst:-false} 00:40:10.579 }, 00:40:10.579 "method": "bdev_nvme_attach_controller" 00:40:10.579 } 00:40:10.579 EOF 00:40:10.579 )") 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:40:10.579 17:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:10.579 "params": { 00:40:10.579 "name": "Nvme0", 00:40:10.579 "trtype": "tcp", 00:40:10.579 "traddr": "10.0.0.2", 00:40:10.579 "adrfam": "ipv4", 00:40:10.579 "trsvcid": "4420", 00:40:10.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:10.579 "hdgst": false, 00:40:10.579 "ddgst": false 00:40:10.579 }, 00:40:10.579 "method": "bdev_nvme_attach_controller" 00:40:10.579 }' 00:40:10.579 [2024-10-01 17:40:09.105273] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:40:10.579 [2024-10-01 17:40:09.105329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327132 ] 00:40:10.839 [2024-10-01 17:40:09.166419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.839 [2024-10-01 17:40:09.196042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.099 Running I/O for 1 seconds... 00:40:12.039 1600.00 IOPS, 100.00 MiB/s 00:40:12.039 Latency(us) 00:40:12.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.039 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:12.039 Verification LBA range: start 0x0 length 0x400 00:40:12.039 Nvme0n1 : 1.03 1612.29 100.77 0.00 0.00 38940.80 2102.61 36263.25 00:40:12.039 =================================================================================================================== 00:40:12.039 Total : 1612.29 100.77 0.00 0.00 38940.80 2102.61 36263.25 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:12.299 rmmod nvme_tcp 00:40:12.299 rmmod nvme_fabrics 00:40:12.299 rmmod nvme_keyring 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:12.299 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3326429 ']' 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3326429 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@950 -- # '[' -z 3326429 ']' 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3326429 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3326429 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3326429' 00:40:12.300 killing process with pid 3326429 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3326429 00:40:12.300 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3326429 00:40:12.560 [2024-10-01 17:40:10.878960] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.560 17:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.473 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:14.473 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:14.473 00:40:14.473 real 0m14.414s 00:40:14.473 user 0m18.841s 00:40:14.473 sys 0m7.321s 00:40:14.473 17:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:14.473 17:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:14.473 ************************************ 00:40:14.473 END TEST nvmf_host_management 00:40:14.473 ************************************ 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:14.735 ************************************ 00:40:14.735 START TEST nvmf_lvol 00:40:14.735 ************************************ 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:14.735 * Looking for test storage... 00:40:14.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:14.735 17:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:14.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.735 --rc genhtml_branch_coverage=1 00:40:14.735 --rc genhtml_function_coverage=1 00:40:14.735 --rc genhtml_legend=1 00:40:14.735 --rc geninfo_all_blocks=1 00:40:14.735 --rc geninfo_unexecuted_blocks=1 00:40:14.735 00:40:14.735 ' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:14.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.735 --rc genhtml_branch_coverage=1 00:40:14.735 --rc genhtml_function_coverage=1 00:40:14.735 --rc genhtml_legend=1 00:40:14.735 --rc geninfo_all_blocks=1 00:40:14.735 --rc geninfo_unexecuted_blocks=1 00:40:14.735 00:40:14.735 ' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:14.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.735 --rc genhtml_branch_coverage=1 00:40:14.735 --rc genhtml_function_coverage=1 00:40:14.735 --rc genhtml_legend=1 00:40:14.735 --rc geninfo_all_blocks=1 00:40:14.735 --rc geninfo_unexecuted_blocks=1 00:40:14.735 00:40:14.735 ' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:14.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.735 --rc genhtml_branch_coverage=1 00:40:14.735 --rc genhtml_function_coverage=1 00:40:14.735 --rc 
genhtml_legend=1 00:40:14.735 --rc geninfo_all_blocks=1 00:40:14.735 --rc geninfo_unexecuted_blocks=1 00:40:14.735 00:40:14.735 ' 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:14.735 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.736 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.997 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.998 17:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:14.998 17:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:23.142 17:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:23.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:23.142 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:23.142 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:23.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:23.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:23.143 
17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:23.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:23.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:40:23.143 00:40:23.143 --- 10.0.0.2 ping statistics --- 00:40:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:23.143 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:23.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:23.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:40:23.143 00:40:23.143 --- 10.0.0.1 ping statistics --- 00:40:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:23.143 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3331589 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3331589 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3331589 ']' 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:23.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:23.143 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:23.143 [2024-10-01 17:40:20.668398] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:40:23.143 [2024-10-01 17:40:20.669373] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:40:23.143 [2024-10-01 17:40:20.669412] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:23.143 [2024-10-01 17:40:20.735307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:23.143 [2024-10-01 17:40:20.765873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:23.143 [2024-10-01 17:40:20.765912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:23.143 [2024-10-01 17:40:20.765920] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:23.143 [2024-10-01 17:40:20.765926] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:23.143 [2024-10-01 17:40:20.765932] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:23.143 [2024-10-01 17:40:20.766037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.143 [2024-10-01 17:40:20.766104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:23.143 [2024-10-01 17:40:20.766107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.143 [2024-10-01 17:40:20.832775] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:23.143 [2024-10-01 17:40:20.833089] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:23.143 [2024-10-01 17:40:20.833341] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:23.143 [2024-10-01 17:40:20.833670] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.144 17:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:23.144 [2024-10-01 17:40:21.063122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:23.144 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:23.405 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e64729b5-93d2-40e4-a4e5-3dd12d090f45 00:40:23.405 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e64729b5-93d2-40e4-a4e5-3dd12d090f45 lvol 20 00:40:23.666 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=26440bb3-0ff8-4454-909a-edde12ad7ec7 00:40:23.666 17:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:23.666 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26440bb3-0ff8-4454-909a-edde12ad7ec7 00:40:23.927 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:23.927 [2024-10-01 17:40:22.458923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:24.187 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:24.187 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3332036 00:40:24.187 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:24.187 17:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:25.128 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 26440bb3-0ff8-4454-909a-edde12ad7ec7 MY_SNAPSHOT 00:40:25.388 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=893e12f9-5b24-48bb-964b-6b180144af65 00:40:25.388 17:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 26440bb3-0ff8-4454-909a-edde12ad7ec7 30 00:40:25.648 17:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 893e12f9-5b24-48bb-964b-6b180144af65 MY_CLONE 00:40:25.908 17:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6fc0a6b9-1414-40bc-a37a-157c6d58f104 00:40:25.908 17:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6fc0a6b9-1414-40bc-a37a-157c6d58f104 00:40:26.479 17:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3332036 00:40:34.615 Initializing NVMe Controllers 00:40:34.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:34.615 Controller IO queue size 128, less than required. 00:40:34.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:34.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:34.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:34.615 Initialization complete. Launching workers. 
00:40:34.615 ======================================================== 00:40:34.615 Latency(us) 00:40:34.615 Device Information : IOPS MiB/s Average min max 00:40:34.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12316.80 48.11 10393.15 1610.81 54746.41 00:40:34.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15800.90 61.72 8101.52 1709.06 60964.77 00:40:34.615 ======================================================== 00:40:34.615 Total : 28117.70 109.83 9105.35 1610.81 60964.77 00:40:34.615 00:40:34.615 17:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:34.615 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26440bb3-0ff8-4454-909a-edde12ad7ec7 00:40:34.876 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e64729b5-93d2-40e4-a4e5-3dd12d090f45 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:35.137 rmmod nvme_tcp 00:40:35.137 rmmod nvme_fabrics 00:40:35.137 rmmod nvme_keyring 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3331589 ']' 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3331589 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3331589 ']' 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3331589 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3331589 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3331589' 00:40:35.137 killing process with pid 3331589 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3331589 00:40:35.137 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3331589 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:35.398 17:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:37.942 00:40:37.942 real 0m22.796s 00:40:37.942 user 0m55.257s 00:40:37.942 sys 0m10.271s 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:37.942 ************************************ 00:40:37.942 END TEST nvmf_lvol 00:40:37.942 ************************************ 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:37.942 ************************************ 00:40:37.942 START TEST nvmf_lvs_grow 00:40:37.942 
************************************ 00:40:37.942 17:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:37.942 * Looking for test storage... 00:40:37.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:37.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.942 --rc genhtml_branch_coverage=1 00:40:37.942 --rc genhtml_function_coverage=1 00:40:37.942 --rc genhtml_legend=1 00:40:37.942 --rc geninfo_all_blocks=1 00:40:37.942 --rc geninfo_unexecuted_blocks=1 00:40:37.942 00:40:37.942 ' 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:37.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.942 --rc genhtml_branch_coverage=1 00:40:37.942 --rc genhtml_function_coverage=1 00:40:37.942 --rc genhtml_legend=1 00:40:37.942 --rc geninfo_all_blocks=1 00:40:37.942 --rc geninfo_unexecuted_blocks=1 00:40:37.942 00:40:37.942 ' 00:40:37.942 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:37.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.942 --rc genhtml_branch_coverage=1 00:40:37.942 --rc genhtml_function_coverage=1 00:40:37.942 --rc genhtml_legend=1 00:40:37.942 --rc geninfo_all_blocks=1 00:40:37.943 --rc geninfo_unexecuted_blocks=1 00:40:37.943 00:40:37.943 ' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:37.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.943 --rc genhtml_branch_coverage=1 00:40:37.943 --rc genhtml_function_coverage=1 00:40:37.943 --rc genhtml_legend=1 00:40:37.943 --rc geninfo_all_blocks=1 00:40:37.943 --rc geninfo_unexecuted_blocks=1 00:40:37.943 00:40:37.943 ' 00:40:37.943 17:40:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
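Note on the trace above: build_nvmf_app_args assembles the target's command line incrementally in a bash array, appending optional flags such as --interrupt-mode only when the corresponding test knob is enabled (hence the '[' 1 -eq 1 ']' checks that follow). A minimal sketch of that pattern, with illustrative variable names rather than the framework's actual globals:

    #!/usr/bin/env bash
    # Sketch of the argument-assembly pattern visible in build_nvmf_app_args.
    # APP_SHM_ID and INTERRUPT_MODE are stand-ins, not the real test globals.
    declare -a NVMF_APP=(./build/bin/nvmf_tgt)

    APP_SHM_ID=0
    INTERRUPT_MODE=1

    # Unconditional arguments: shared-memory id and full tracepoint mask.
    NVMF_APP+=(-i "$APP_SHM_ID" -e 0xFFFF)

    # Optional flags are appended only when the matching knob is enabled.
    if [ "$INTERRUPT_MODE" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)
    fi

    printf 'would run: %s\n' "${NVMF_APP[*]}"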
00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:37.943 17:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.082 17:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
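Note on the device discovery above: gather_supported_nvmf_pci_devs groups the PCI IDs of supported NICs into per-family bash arrays (e810, x722, mlx) keyed by vendor:device, picks the family relevant to the chosen transport, and the loop that follows walks each device's sysfs entry to find its bound net interface. A rough sketch of that lookup, with a hand-populated pci_bus_cache standing in for the framework's real PCI scan:

    #!/usr/bin/env bash
    # Illustrative only: the real pci_bus_cache is filled by scanning the PCI bus.
    declare -A pci_bus_cache
    pci_bus_cache["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"   # two E810 ports, as in the log

    intel=0x8086
    declare -a e810=() pci_devs=()

    # Collect every cached PCI function for the E810 device ID of interest.
    e810+=(${pci_bus_cache["$intel:0x159b"]})

    # For the tcp transport the e810 list becomes the candidate device list.
    pci_devs=("${e810[@]}")

    for pci in "${pci_devs[@]}"; do
        # Each PCI function exposes its netdev name under sysfs.
        echo "candidate $pci -> $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
    done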
00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:46.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:46.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:46.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:46.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.082 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.083 17:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:46.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:40:46.083 00:40:46.083 --- 10.0.0.2 ping statistics --- 00:40:46.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.083 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:46.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:40:46.083 00:40:46.083 --- 10.0.0.1 ping statistics --- 00:40:46.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.083 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3338170 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3338170 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3338170 ']' 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:46.083 17:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:46.083 [2024-10-01 17:40:43.598445] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
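Note on the setup traced above: because both E810 ports sit in the same host, nvmf_tcp_init isolates the target port in a network namespace so that the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) exchange traffic over the physical link; nvmf_tgt is then launched inside that namespace with -m 0x1 and --interrupt-mode, which produces the interrupt-mode and EAL notices around this point. The equivalent manual steps, reconstructed from the log (treat the exact ordering as an approximation of nvmf_tcp_init, not a verbatim copy):

    #!/usr/bin/env bash
    # Interface and namespace names copied from the log above.
    TARGET_IF=cvl_0_0        # moved into the namespace, used by the SPDK target
    INITIATOR_IF=cvl_0_1     # stays in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic reach the default port on the initiator-side interface.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Connectivity checks in both directions, as in the log.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The target is then started inside the namespace, roughly:
    # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1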
00:40:46.083 [2024-10-01 17:40:43.599562] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:40:46.083 [2024-10-01 17:40:43.599615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:46.083 [2024-10-01 17:40:43.670906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.083 [2024-10-01 17:40:43.708484] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:46.083 [2024-10-01 17:40:43.708530] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:46.083 [2024-10-01 17:40:43.708539] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:46.083 [2024-10-01 17:40:43.708546] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:46.083 [2024-10-01 17:40:43.708552] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:46.083 [2024-10-01 17:40:43.708576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.083 [2024-10-01 17:40:43.758359] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:46.083 [2024-10-01 17:40:43.758615] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:46.083 [2024-10-01 17:40:44.601127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.083 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:46.343 ************************************ 00:40:46.343 START TEST lvs_grow_clean 00:40:46.343 ************************************ 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:46.343 17:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:46.603 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=54de413a-196f-46f1-ad35-da137af19b21 00:40:46.603 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:40:46.603 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54de413a-196f-46f1-ad35-da137af19b21 lvol 150 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e9b4044-a272-475c-b57e-4a8321445efc 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:46.864 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:47.125 [2024-10-01 17:40:45.553122] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:47.125 [2024-10-01 17:40:45.553289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:47.125 true 00:40:47.125 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:40:47.125 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:47.386 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:47.386 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:47.386 17:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e9b4044-a272-475c-b57e-4a8321445efc 00:40:47.648 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:47.909 [2024-10-01 17:40:46.229402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3338777 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3338777 /var/tmp/bdevperf.sock 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3338777 ']' 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:47.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:47.909 17:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:48.169 [2024-10-01 17:40:46.486050] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:40:48.169 [2024-10-01 17:40:46.486125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338777 ] 00:40:48.169 [2024-10-01 17:40:46.568782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.169 [2024-10-01 17:40:46.617452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:49.111 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:49.111 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:40:49.111 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:49.371 Nvme0n1 00:40:49.371 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:49.371 [ 00:40:49.371 { 00:40:49.371 "name": "Nvme0n1", 00:40:49.371 "aliases": [ 00:40:49.371 "1e9b4044-a272-475c-b57e-4a8321445efc" 00:40:49.372 ], 00:40:49.372 "product_name": "NVMe disk", 00:40:49.372 "block_size": 4096, 00:40:49.372 "num_blocks": 38912, 00:40:49.372 "uuid": "1e9b4044-a272-475c-b57e-4a8321445efc", 00:40:49.372 "numa_id": 0, 00:40:49.372 "assigned_rate_limits": { 00:40:49.372 "rw_ios_per_sec": 0, 00:40:49.372 "rw_mbytes_per_sec": 0, 00:40:49.372 "r_mbytes_per_sec": 0, 00:40:49.372 "w_mbytes_per_sec": 0 00:40:49.372 }, 00:40:49.372 "claimed": false, 00:40:49.372 "zoned": false, 00:40:49.372 "supported_io_types": { 00:40:49.372 "read": true, 00:40:49.372 "write": true, 00:40:49.372 "unmap": true, 00:40:49.372 "flush": true, 00:40:49.372 "reset": true, 00:40:49.372 "nvme_admin": true, 00:40:49.372 "nvme_io": true, 00:40:49.372 "nvme_io_md": false, 00:40:49.372 "write_zeroes": true, 00:40:49.372 "zcopy": false, 00:40:49.372 "get_zone_info": false, 00:40:49.372 "zone_management": false, 00:40:49.372 "zone_append": false, 00:40:49.372 "compare": true, 00:40:49.372 "compare_and_write": true, 00:40:49.372 "abort": true, 00:40:49.372 "seek_hole": false, 00:40:49.372 "seek_data": false, 00:40:49.372 "copy": true, 
00:40:49.372 "nvme_iov_md": false 00:40:49.372 }, 00:40:49.372 "memory_domains": [ 00:40:49.372 { 00:40:49.372 "dma_device_id": "system", 00:40:49.372 "dma_device_type": 1 00:40:49.372 } 00:40:49.372 ], 00:40:49.372 "driver_specific": { 00:40:49.372 "nvme": [ 00:40:49.372 { 00:40:49.372 "trid": { 00:40:49.372 "trtype": "TCP", 00:40:49.372 "adrfam": "IPv4", 00:40:49.372 "traddr": "10.0.0.2", 00:40:49.372 "trsvcid": "4420", 00:40:49.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:49.372 }, 00:40:49.372 "ctrlr_data": { 00:40:49.372 "cntlid": 1, 00:40:49.372 "vendor_id": "0x8086", 00:40:49.372 "model_number": "SPDK bdev Controller", 00:40:49.372 "serial_number": "SPDK0", 00:40:49.372 "firmware_revision": "25.01", 00:40:49.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:49.372 "oacs": { 00:40:49.372 "security": 0, 00:40:49.372 "format": 0, 00:40:49.372 "firmware": 0, 00:40:49.372 "ns_manage": 0 00:40:49.372 }, 00:40:49.372 "multi_ctrlr": true, 00:40:49.372 "ana_reporting": false 00:40:49.372 }, 00:40:49.372 "vs": { 00:40:49.372 "nvme_version": "1.3" 00:40:49.372 }, 00:40:49.372 "ns_data": { 00:40:49.372 "id": 1, 00:40:49.372 "can_share": true 00:40:49.372 } 00:40:49.372 } 00:40:49.372 ], 00:40:49.372 "mp_policy": "active_passive" 00:40:49.372 } 00:40:49.372 } 00:40:49.372 ] 00:40:49.372 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3338953 00:40:49.372 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:49.372 17:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:49.632 Running I/O for 10 seconds... 
00:40:50.573 Latency(us) 00:40:50.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:50.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.573 Nvme0n1 : 1.00 17735.00 69.28 0.00 0.00 0.00 0.00 0.00 00:40:50.573 =================================================================================================================== 00:40:50.573 Total : 17735.00 69.28 0.00 0.00 0.00 0.00 0.00 00:40:50.573 00:40:51.513 17:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 54de413a-196f-46f1-ad35-da137af19b21 00:40:51.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.513 Nvme0n1 : 2.00 17859.00 69.76 0.00 0.00 0.00 0.00 0.00 00:40:51.513 =================================================================================================================== 00:40:51.513 Total : 17859.00 69.76 0.00 0.00 0.00 0.00 0.00 00:40:51.513 00:40:51.513 true 00:40:51.774 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:40:51.774 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:51.774 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:51.774 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:51.774 17:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3338953 00:40:52.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.715 Nvme0n1 : 3.00 17901.00 69.93 0.00 0.00 0.00 0.00 0.00 00:40:52.715 =================================================================================================================== 00:40:52.715 Total : 17901.00 69.93 0.00 0.00 0.00 0.00 0.00 00:40:52.715 00:40:53.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.655 Nvme0n1 : 4.00 17937.75 70.07 0.00 0.00 0.00 0.00 0.00 00:40:53.655 =================================================================================================================== 00:40:53.655 Total : 17937.75 70.07 0.00 0.00 0.00 0.00 0.00 00:40:53.655 00:40:54.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.592 Nvme0n1 : 5.00 17972.40 70.20 0.00 0.00 0.00 0.00 0.00 00:40:54.592 =================================================================================================================== 00:40:54.592 Total : 17972.40 70.20 0.00 0.00 0.00 0.00 0.00 00:40:54.592 00:40:55.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.534 Nvme0n1 : 6.00 17985.00 70.25 0.00 0.00 0.00 0.00 0.00 00:40:55.534 =================================================================================================================== 00:40:55.534 Total : 17985.00 70.25 0.00 0.00 0.00 0.00 0.00 00:40:55.534 00:40:56.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.472 Nvme0n1 : 7.00 18012.29 70.36 0.00 0.00 0.00 0.00 
0.00 00:40:56.472 =================================================================================================================== 00:40:56.472 Total : 18012.29 70.36 0.00 0.00 0.00 0.00 0.00 00:40:56.472 00:40:57.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.853 Nvme0n1 : 8.00 18024.88 70.41 0.00 0.00 0.00 0.00 0.00 00:40:57.853 =================================================================================================================== 00:40:57.853 Total : 18024.88 70.41 0.00 0.00 0.00 0.00 0.00 00:40:57.853 00:40:58.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.818 Nvme0n1 : 9.00 18036.33 70.45 0.00 0.00 0.00 0.00 0.00 00:40:58.818 =================================================================================================================== 00:40:58.818 Total : 18036.33 70.45 0.00 0.00 0.00 0.00 0.00 00:40:58.818 00:40:59.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.490 Nvme0n1 : 10.00 18048.60 70.50 0.00 0.00 0.00 0.00 0.00 00:40:59.490 =================================================================================================================== 00:40:59.490 Total : 18048.60 70.50 0.00 0.00 0.00 0.00 0.00 00:40:59.490 00:40:59.490 00:40:59.490 Latency(us) 00:40:59.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:59.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.490 Nvme0n1 : 10.01 18049.14 70.50 0.00 0.00 7088.28 2430.29 13052.59 00:40:59.490 =================================================================================================================== 00:40:59.490 Total : 18049.14 70.50 0.00 0.00 7088.28 2430.29 13052.59 00:40:59.490 { 00:40:59.490 "results": [ 00:40:59.490 { 00:40:59.490 "job": "Nvme0n1", 00:40:59.490 "core_mask": "0x2", 00:40:59.490 "workload": "randwrite", 00:40:59.490 "status": "finished", 00:40:59.490 "queue_depth": 128, 00:40:59.490 "io_size": 4096, 00:40:59.490 "runtime": 10.006794, 00:40:59.490 "iops": 18049.137416039543, 00:40:59.490 "mibps": 70.50444303140446, 00:40:59.490 "io_failed": 0, 00:40:59.490 "io_timeout": 0, 00:40:59.490 "avg_latency_us": 7088.280314335176, 00:40:59.490 "min_latency_us": 2430.2933333333335, 00:40:59.490 "max_latency_us": 13052.586666666666 00:40:59.490 } 00:40:59.490 ], 00:40:59.490 "core_count": 1 00:40:59.490 } 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3338777 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3338777 ']' 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3338777 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:59.490 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3338777 00:40:59.750 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:59.750 17:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:59.750 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3338777' 00:40:59.750 killing process with pid 3338777 00:40:59.750 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3338777 00:40:59.750 Received shutdown signal, test time was about 10.000000 seconds 00:40:59.750 00:40:59.750 Latency(us) 00:40:59.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:59.750 =================================================================================================================== 00:40:59.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:59.750 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3338777 00:40:59.750 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:00.010 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:00.010 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:00.010 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:00.271 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:00.271 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:00.271 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:00.534 [2024-10-01 17:40:58.885065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:00.534 17:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:00.796 request: 00:41:00.796 { 00:41:00.796 "uuid": "54de413a-196f-46f1-ad35-da137af19b21", 00:41:00.796 "method": "bdev_lvol_get_lvstores", 00:41:00.796 "req_id": 1 00:41:00.796 } 00:41:00.796 Got JSON-RPC error response 00:41:00.796 response: 00:41:00.796 { 00:41:00.796 "code": -19, 00:41:00.796 "message": "No such device" 00:41:00.796 } 00:41:00.796 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:41:00.796 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:00.796 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:00.796 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:00.796 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:00.796 aio_bdev 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e9b4044-a272-475c-b57e-4a8321445efc 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1e9b4044-a272-475c-b57e-4a8321445efc 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:00.797 17:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:00.797 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:01.056 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e9b4044-a272-475c-b57e-4a8321445efc -t 2000 00:41:01.318 [ 00:41:01.318 { 00:41:01.318 "name": "1e9b4044-a272-475c-b57e-4a8321445efc", 00:41:01.318 "aliases": [ 00:41:01.318 "lvs/lvol" 00:41:01.318 ], 00:41:01.318 "product_name": "Logical Volume", 00:41:01.318 "block_size": 4096, 00:41:01.318 "num_blocks": 38912, 00:41:01.318 "uuid": "1e9b4044-a272-475c-b57e-4a8321445efc", 00:41:01.318 "assigned_rate_limits": { 00:41:01.318 "rw_ios_per_sec": 0, 00:41:01.318 "rw_mbytes_per_sec": 0, 00:41:01.318 "r_mbytes_per_sec": 0, 00:41:01.318 "w_mbytes_per_sec": 0 00:41:01.318 }, 00:41:01.318 "claimed": false, 00:41:01.318 "zoned": false, 00:41:01.318 "supported_io_types": { 00:41:01.318 "read": true, 00:41:01.318 "write": true, 00:41:01.318 "unmap": true, 00:41:01.318 "flush": false, 00:41:01.318 "reset": true, 00:41:01.318 "nvme_admin": false, 00:41:01.318 "nvme_io": false, 00:41:01.318 "nvme_io_md": false, 00:41:01.318 "write_zeroes": true, 00:41:01.318 "zcopy": false, 00:41:01.318 "get_zone_info": false, 00:41:01.318 "zone_management": false, 00:41:01.318 "zone_append": false, 00:41:01.318 "compare": false, 00:41:01.318 "compare_and_write": false, 00:41:01.318 "abort": false, 00:41:01.318 "seek_hole": true, 00:41:01.318 "seek_data": true, 00:41:01.318 "copy": false, 00:41:01.318 "nvme_iov_md": false 00:41:01.318 }, 00:41:01.318 "driver_specific": { 00:41:01.318 "lvol": { 00:41:01.318 "lvol_store_uuid": "54de413a-196f-46f1-ad35-da137af19b21", 00:41:01.318 "base_bdev": "aio_bdev", 00:41:01.318 "thin_provision": false, 00:41:01.318 "num_allocated_clusters": 38, 00:41:01.318 "snapshot": false, 00:41:01.318 "clone": false, 00:41:01.318 "esnap_clone": false 00:41:01.318 } 00:41:01.318 } 00:41:01.318 } 00:41:01.318 ] 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:01.318 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:01.578 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:41:01.578 17:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e9b4044-a272-475c-b57e-4a8321445efc 00:41:01.856 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54de413a-196f-46f1-ad35-da137af19b21 00:41:01.856 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:02.117 00:41:02.117 real 0m15.842s 00:41:02.117 user 0m15.519s 00:41:02.117 sys 0m1.450s 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:02.117 ************************************ 00:41:02.117 END TEST lvs_grow_clean 00:41:02.117 ************************************ 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:02.117 ************************************ 00:41:02.117 START TEST lvs_grow_dirty 00:41:02.117 ************************************ 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:02.117 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:02.377 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:02.377 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:02.639 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:02.639 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:02.639 17:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:02.639 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:02.639 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:02.639 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa48545d-8549-4b08-875f-3ef5f08e31f8 lvol 150 00:41:02.900 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:02.900 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:02.900 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:03.160 [2024-10-01 17:41:01.501021] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:03.160 [2024-10-01 17:41:01.501116] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:03.160 true 00:41:03.160 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:03.160 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:03.160 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:03.160 17:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:03.421 17:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:03.682 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:03.682 [2024-10-01 17:41:02.197218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:03.682 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3341658 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3341658 /var/tmp/bdevperf.sock 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3341658 ']' 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:03.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:03.942 17:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:03.942 [2024-10-01 17:41:02.411281] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
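The xtrace above walks the lvs_grow_dirty setup end to end: back an AIO bdev with a 200M file, build an lvol store and a 150M lvol on it, grow the file to 400M and rescan, then export the lvol over NVMe/TCP. A minimal sketch of the same RPC sequence, with placeholder paths and assuming an already-initialized nvmf/TCP target (transport created) reachable via rpc.py:

# Sketch only -- mirrors the setup traced above; paths are placeholders.
RPC=./scripts/rpc.py
AIO_FILE=/tmp/aio_bdev_file                            # placeholder for the test's aio_bdev backing file

truncate -s 200M "$AIO_FILE"                           # initial 200 MiB backing file
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096         # expose it as an AIO bdev
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs) # ~49 data clusters at 4 MiB each
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB logical volume

truncate -s 400M "$AIO_FILE"                           # grow the file underneath
$RPC bdev_aio_rescan aio_bdev                          # bdev picks up the new size; the lvstore does not yet

# Export the lvol over NVMe/TCP so bdevperf can write to it while the store is still "dirty".
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420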
00:41:03.942 [2024-10-01 17:41:02.411335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341658 ] 00:41:03.942 [2024-10-01 17:41:02.487426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.202 [2024-10-01 17:41:02.515919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:04.787 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:04.787 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:04.787 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:05.047 Nvme0n1 00:41:05.047 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:05.307 [ 00:41:05.307 { 00:41:05.307 "name": "Nvme0n1", 00:41:05.307 "aliases": [ 00:41:05.307 "468deb01-dce0-4e3f-b8b7-ed602d9ef6da" 00:41:05.307 ], 00:41:05.307 "product_name": "NVMe disk", 00:41:05.307 "block_size": 4096, 00:41:05.307 "num_blocks": 38912, 00:41:05.307 "uuid": "468deb01-dce0-4e3f-b8b7-ed602d9ef6da", 00:41:05.307 "numa_id": 0, 00:41:05.307 "assigned_rate_limits": { 00:41:05.307 "rw_ios_per_sec": 0, 00:41:05.307 "rw_mbytes_per_sec": 0, 00:41:05.308 "r_mbytes_per_sec": 0, 00:41:05.308 "w_mbytes_per_sec": 0 00:41:05.308 }, 00:41:05.308 "claimed": false, 00:41:05.308 "zoned": false, 00:41:05.308 "supported_io_types": { 00:41:05.308 "read": true, 00:41:05.308 "write": true, 00:41:05.308 "unmap": true, 00:41:05.308 "flush": true, 00:41:05.308 "reset": true, 00:41:05.308 "nvme_admin": true, 00:41:05.308 "nvme_io": true, 00:41:05.308 "nvme_io_md": false, 00:41:05.308 "write_zeroes": true, 00:41:05.308 "zcopy": false, 00:41:05.308 "get_zone_info": false, 00:41:05.308 "zone_management": false, 00:41:05.308 "zone_append": false, 00:41:05.308 "compare": true, 00:41:05.308 "compare_and_write": true, 00:41:05.308 "abort": true, 00:41:05.308 "seek_hole": false, 00:41:05.308 "seek_data": false, 00:41:05.308 "copy": true, 00:41:05.308 "nvme_iov_md": false 00:41:05.308 }, 00:41:05.308 "memory_domains": [ 00:41:05.308 { 00:41:05.308 "dma_device_id": "system", 00:41:05.308 "dma_device_type": 1 00:41:05.308 } 00:41:05.308 ], 00:41:05.308 "driver_specific": { 00:41:05.308 "nvme": [ 00:41:05.308 { 00:41:05.308 "trid": { 00:41:05.308 "trtype": "TCP", 00:41:05.308 "adrfam": "IPv4", 00:41:05.308 "traddr": "10.0.0.2", 00:41:05.308 "trsvcid": "4420", 00:41:05.308 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:05.308 }, 00:41:05.308 "ctrlr_data": { 00:41:05.308 "cntlid": 1, 00:41:05.308 "vendor_id": "0x8086", 00:41:05.308 "model_number": "SPDK bdev Controller", 00:41:05.308 "serial_number": "SPDK0", 00:41:05.308 "firmware_revision": "25.01", 00:41:05.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.308 "oacs": { 00:41:05.308 "security": 0, 00:41:05.308 "format": 0, 00:41:05.308 "firmware": 0, 00:41:05.308 "ns_manage": 0 00:41:05.308 }, 
00:41:05.308 "multi_ctrlr": true, 00:41:05.308 "ana_reporting": false 00:41:05.308 }, 00:41:05.308 "vs": { 00:41:05.308 "nvme_version": "1.3" 00:41:05.308 }, 00:41:05.308 "ns_data": { 00:41:05.308 "id": 1, 00:41:05.308 "can_share": true 00:41:05.308 } 00:41:05.308 } 00:41:05.308 ], 00:41:05.308 "mp_policy": "active_passive" 00:41:05.308 } 00:41:05.308 } 00:41:05.308 ] 00:41:05.308 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3341971 00:41:05.308 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:05.308 17:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:05.308 Running I/O for 10 seconds... 00:41:06.692 Latency(us) 00:41:06.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:06.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:06.692 Nvme0n1 : 1.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:41:06.692 =================================================================================================================== 00:41:06.692 Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:41:06.692 00:41:07.261 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:07.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:07.261 Nvme0n1 : 2.00 17884.00 69.86 0.00 0.00 0.00 0.00 0.00 00:41:07.261 =================================================================================================================== 00:41:07.261 Total : 17884.00 69.86 0.00 0.00 0.00 0.00 0.00 00:41:07.261 00:41:07.521 true 00:41:07.521 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:07.521 17:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:07.783 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:07.783 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:07.783 17:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3341971 00:41:08.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.353 Nvme0n1 : 3.00 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:41:08.353 =================================================================================================================== 00:41:08.353 Total : 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:41:08.353 00:41:09.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.293 Nvme0n1 : 4.00 17949.75 70.12 0.00 0.00 0.00 0.00 0.00 00:41:09.293 =================================================================================================================== 
00:41:09.293 Total : 17949.75 70.12 0.00 0.00 0.00 0.00 0.00 00:41:09.293 00:41:10.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.678 Nvme0n1 : 5.00 17982.40 70.24 0.00 0.00 0.00 0.00 0.00 00:41:10.678 =================================================================================================================== 00:41:10.678 Total : 17982.40 70.24 0.00 0.00 0.00 0.00 0.00 00:41:10.678 00:41:11.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.616 Nvme0n1 : 6.00 18003.67 70.33 0.00 0.00 0.00 0.00 0.00 00:41:11.616 =================================================================================================================== 00:41:11.616 Total : 18003.67 70.33 0.00 0.00 0.00 0.00 0.00 00:41:11.616 00:41:12.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:12.558 Nvme0n1 : 7.00 18019.00 70.39 0.00 0.00 0.00 0.00 0.00 00:41:12.558 =================================================================================================================== 00:41:12.558 Total : 18019.00 70.39 0.00 0.00 0.00 0.00 0.00 00:41:12.558 00:41:13.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:13.500 Nvme0n1 : 8.00 18030.75 70.43 0.00 0.00 0.00 0.00 0.00 00:41:13.500 =================================================================================================================== 00:41:13.500 Total : 18030.75 70.43 0.00 0.00 0.00 0.00 0.00 00:41:13.500 00:41:14.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.443 Nvme0n1 : 9.00 18039.67 70.47 0.00 0.00 0.00 0.00 0.00 00:41:14.443 =================================================================================================================== 00:41:14.443 Total : 18039.67 70.47 0.00 0.00 0.00 0.00 0.00 00:41:14.443 00:41:15.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.387 Nvme0n1 : 10.00 18047.90 70.50 0.00 0.00 0.00 0.00 0.00 00:41:15.387 =================================================================================================================== 00:41:15.387 Total : 18047.90 70.50 0.00 0.00 0.00 0.00 0.00 00:41:15.387 00:41:15.387 00:41:15.387 Latency(us) 00:41:15.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.387 Nvme0n1 : 10.01 18052.25 70.52 0.00 0.00 7087.53 4396.37 14527.15 00:41:15.387 =================================================================================================================== 00:41:15.387 Total : 18052.25 70.52 0.00 0.00 7087.53 4396.37 14527.15 00:41:15.387 { 00:41:15.387 "results": [ 00:41:15.387 { 00:41:15.387 "job": "Nvme0n1", 00:41:15.387 "core_mask": "0x2", 00:41:15.387 "workload": "randwrite", 00:41:15.387 "status": "finished", 00:41:15.387 "queue_depth": 128, 00:41:15.387 "io_size": 4096, 00:41:15.387 "runtime": 10.007342, 00:41:15.387 "iops": 18052.24604095673, 00:41:15.387 "mibps": 70.51658609748722, 00:41:15.387 "io_failed": 0, 00:41:15.387 "io_timeout": 0, 00:41:15.387 "avg_latency_us": 7087.529790447722, 00:41:15.387 "min_latency_us": 4396.373333333333, 00:41:15.387 "max_latency_us": 14527.146666666667 00:41:15.387 } 00:41:15.387 ], 00:41:15.387 "core_count": 1 00:41:15.387 } 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3341658 00:41:15.387 
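The 10-second bdevperf run summarized above is driven over a second RPC socket while the lvol store is grown mid-I/O. A minimal sketch, assuming SPDK is built under ./build and reusing $lvs from the setup sketch:

# Sketch only -- how the run recorded above is driven; flags match the trace.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &        # -z: wait for RPC before starting I/O
BDEVPERF_PID=$!

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Kick off the randwrite workload, then grow the lvstore while I/O is in flight.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
RUN_PID=$!
sleep 2
./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # total_data_clusters grows from 49 to 99
wait "$RUN_PID"                                        # bdevperf prints the per-second table and JSON summary
kill "$BDEVPERF_PID"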
17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3341658 ']' 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3341658 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3341658 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3341658' 00:41:15.387 killing process with pid 3341658 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3341658 00:41:15.387 Received shutdown signal, test time was about 10.000000 seconds 00:41:15.387 00:41:15.387 Latency(us) 00:41:15.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.387 =================================================================================================================== 00:41:15.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:15.387 17:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3341658 00:41:15.648 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:15.909 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:15.909 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:15.909 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3338170 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3338170 00:41:16.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3338170 Killed 
"${NVMF_APP[@]}" "$@" 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3343989 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3343989 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3343989 ']' 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:16.170 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:16.171 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:16.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:16.171 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:16.171 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:16.171 [2024-10-01 17:41:14.658468] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:16.171 [2024-10-01 17:41:14.659458] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:16.171 [2024-10-01 17:41:14.659502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:16.431 [2024-10-01 17:41:14.727260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.431 [2024-10-01 17:41:14.757343] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:16.431 [2024-10-01 17:41:14.757381] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:16.431 [2024-10-01 17:41:14.757389] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:16.431 [2024-10-01 17:41:14.757400] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:41:16.431 [2024-10-01 17:41:14.757407] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:16.431 [2024-10-01 17:41:14.757426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.431 [2024-10-01 17:41:14.804776] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:16.431 [2024-10-01 17:41:14.805038] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:16.432 17:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:16.692 [2024-10-01 17:41:15.036376] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:16.692 [2024-10-01 17:41:15.036495] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:16.692 [2024-10-01 17:41:15.036526] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:16.692 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 468deb01-dce0-4e3f-b8b7-ed602d9ef6da -t 2000 00:41:16.953 [ 00:41:16.953 { 00:41:16.953 "name": "468deb01-dce0-4e3f-b8b7-ed602d9ef6da", 00:41:16.953 "aliases": [ 00:41:16.953 "lvs/lvol" 00:41:16.953 ], 00:41:16.953 "product_name": "Logical Volume", 00:41:16.953 "block_size": 4096, 00:41:16.953 "num_blocks": 38912, 00:41:16.953 "uuid": "468deb01-dce0-4e3f-b8b7-ed602d9ef6da", 00:41:16.953 "assigned_rate_limits": { 00:41:16.953 "rw_ios_per_sec": 0, 00:41:16.953 "rw_mbytes_per_sec": 0, 00:41:16.953 "r_mbytes_per_sec": 0, 00:41:16.953 "w_mbytes_per_sec": 0 00:41:16.953 }, 00:41:16.953 "claimed": false, 00:41:16.953 "zoned": false, 00:41:16.953 "supported_io_types": { 00:41:16.953 "read": true, 00:41:16.953 "write": true, 00:41:16.953 "unmap": true, 00:41:16.953 "flush": false, 00:41:16.953 "reset": true, 00:41:16.953 "nvme_admin": false, 00:41:16.953 "nvme_io": false, 00:41:16.953 "nvme_io_md": false, 00:41:16.953 "write_zeroes": true, 00:41:16.953 "zcopy": false, 00:41:16.953 "get_zone_info": false, 00:41:16.953 "zone_management": false, 00:41:16.953 "zone_append": false, 00:41:16.953 "compare": false, 00:41:16.953 "compare_and_write": false, 00:41:16.953 "abort": false, 00:41:16.953 "seek_hole": true, 00:41:16.953 "seek_data": true, 00:41:16.953 "copy": false, 00:41:16.953 "nvme_iov_md": false 00:41:16.953 }, 00:41:16.953 "driver_specific": { 00:41:16.953 "lvol": { 00:41:16.953 "lvol_store_uuid": "aa48545d-8549-4b08-875f-3ef5f08e31f8", 00:41:16.953 "base_bdev": "aio_bdev", 00:41:16.953 "thin_provision": false, 00:41:16.953 "num_allocated_clusters": 38, 00:41:16.953 "snapshot": false, 00:41:16.953 "clone": false, 00:41:16.953 "esnap_clone": false 00:41:16.953 } 00:41:16.953 } 00:41:16.953 } 00:41:16.953 ] 00:41:16.953 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:16.953 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:16.953 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:17.215 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:17.215 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:17.215 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:17.215 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:17.215 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:17.475 [2024-10-01 17:41:15.889911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:17.475 17:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:17.737 request: 00:41:17.737 { 00:41:17.737 "uuid": "aa48545d-8549-4b08-875f-3ef5f08e31f8", 00:41:17.737 "method": "bdev_lvol_get_lvstores", 00:41:17.737 "req_id": 1 00:41:17.737 } 00:41:17.737 Got JSON-RPC error response 00:41:17.737 response: 00:41:17.737 { 00:41:17.737 "code": -19, 00:41:17.737 "message": "No such device" 00:41:17.737 } 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:17.737 
aio_bdev 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:17.737 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:17.998 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 468deb01-dce0-4e3f-b8b7-ed602d9ef6da -t 2000 00:41:18.260 [ 00:41:18.260 { 00:41:18.260 "name": "468deb01-dce0-4e3f-b8b7-ed602d9ef6da", 00:41:18.260 "aliases": [ 00:41:18.260 "lvs/lvol" 00:41:18.260 ], 00:41:18.260 "product_name": "Logical Volume", 00:41:18.260 "block_size": 4096, 00:41:18.260 "num_blocks": 38912, 00:41:18.260 "uuid": "468deb01-dce0-4e3f-b8b7-ed602d9ef6da", 00:41:18.260 "assigned_rate_limits": { 00:41:18.260 "rw_ios_per_sec": 0, 00:41:18.260 "rw_mbytes_per_sec": 0, 00:41:18.260 "r_mbytes_per_sec": 0, 00:41:18.260 "w_mbytes_per_sec": 0 00:41:18.260 }, 00:41:18.260 "claimed": false, 00:41:18.260 "zoned": false, 00:41:18.260 "supported_io_types": { 00:41:18.260 "read": true, 00:41:18.260 "write": true, 00:41:18.260 "unmap": true, 00:41:18.260 "flush": false, 00:41:18.260 "reset": true, 00:41:18.260 "nvme_admin": false, 00:41:18.260 "nvme_io": false, 00:41:18.260 "nvme_io_md": false, 00:41:18.260 "write_zeroes": true, 00:41:18.260 "zcopy": false, 00:41:18.260 "get_zone_info": false, 00:41:18.260 "zone_management": false, 00:41:18.260 "zone_append": false, 00:41:18.260 "compare": false, 00:41:18.260 "compare_and_write": false, 00:41:18.260 "abort": false, 00:41:18.260 "seek_hole": true, 00:41:18.260 "seek_data": true, 00:41:18.260 "copy": false, 00:41:18.260 "nvme_iov_md": false 00:41:18.260 }, 00:41:18.260 "driver_specific": { 00:41:18.260 "lvol": { 00:41:18.260 "lvol_store_uuid": "aa48545d-8549-4b08-875f-3ef5f08e31f8", 00:41:18.260 "base_bdev": "aio_bdev", 00:41:18.260 "thin_provision": false, 00:41:18.260 "num_allocated_clusters": 38, 00:41:18.260 "snapshot": false, 00:41:18.260 "clone": false, 00:41:18.260 "esnap_clone": false 00:41:18.260 } 00:41:18.260 } 00:41:18.260 } 00:41:18.260 ] 00:41:18.260 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:41:18.260 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:18.260 17:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:18.260 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:18.260 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:18.260 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:18.522 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:18.522 17:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 468deb01-dce0-4e3f-b8b7-ed602d9ef6da 00:41:18.783 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa48545d-8549-4b08-875f-3ef5f08e31f8 00:41:18.783 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:19.045 00:41:19.045 real 0m16.895s 00:41:19.045 user 0m35.342s 00:41:19.045 sys 0m2.848s 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:19.045 ************************************ 00:41:19.045 END TEST lvs_grow_dirty 00:41:19.045 ************************************ 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:19.045 nvmf_trace.0 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:19.045 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:19.306 rmmod nvme_tcp 00:41:19.306 rmmod nvme_fabrics 00:41:19.306 rmmod nvme_keyring 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3343989 ']' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3343989 ']' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343989' 00:41:19.306 killing process with pid 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3343989 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
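The teardown around this point unwinds the initiator modules, the target process, and the test networking. A minimal sketch, with the pid variable and netns name as placeholders (the exact helpers live in nvmf/common.sh):

# Sketch only -- cleanup as traced around this point; names are placeholders.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # drop the kernel initiator modules
kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt started for the test

# Restore iptables, dropping only the SPDK_NVMF rules the test added.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove the test network namespace
ip -4 addr flush cvl_0_1                               # flush the second test interface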
00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.306 17:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:21.851 00:41:21.851 real 0m43.971s 00:41:21.851 user 0m53.753s 00:41:21.851 sys 0m10.296s 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:21.851 ************************************ 00:41:21.851 END TEST nvmf_lvs_grow 00:41:21.851 ************************************ 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:21.851 17:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:21.851 ************************************ 00:41:21.851 START TEST nvmf_bdev_io_wait 00:41:21.851 ************************************ 00:41:21.851 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:21.851 * Looking for test storage... 
00:41:21.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:21.851 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:21.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.852 --rc genhtml_branch_coverage=1 00:41:21.852 --rc genhtml_function_coverage=1 00:41:21.852 --rc genhtml_legend=1 00:41:21.852 --rc geninfo_all_blocks=1 00:41:21.852 --rc geninfo_unexecuted_blocks=1 00:41:21.852 00:41:21.852 ' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:21.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.852 --rc genhtml_branch_coverage=1 00:41:21.852 --rc genhtml_function_coverage=1 00:41:21.852 --rc genhtml_legend=1 00:41:21.852 --rc geninfo_all_blocks=1 00:41:21.852 --rc geninfo_unexecuted_blocks=1 00:41:21.852 00:41:21.852 ' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:21.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.852 --rc genhtml_branch_coverage=1 00:41:21.852 --rc genhtml_function_coverage=1 00:41:21.852 --rc genhtml_legend=1 00:41:21.852 --rc geninfo_all_blocks=1 00:41:21.852 --rc geninfo_unexecuted_blocks=1 00:41:21.852 00:41:21.852 ' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:21.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.852 --rc genhtml_branch_coverage=1 00:41:21.852 --rc genhtml_function_coverage=1 00:41:21.852 --rc genhtml_legend=1 00:41:21.852 --rc geninfo_all_blocks=1 00:41:21.852 --rc 
geninfo_unexecuted_blocks=1 00:41:21.852 00:41:21.852 ' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.852 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:21.853 17:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:29.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.994 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:29.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:29.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:29.995 
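The device discovery traced here walks the cached PCI device list, matches each function against the supported e810/x722/mlx vendor:device pairs, and resolves the bound network interface from the per-device net/ directory, which is how 0000:4b:00.0 maps to cvl_0_0 (and, just below, 0000:4b:00.1 to cvl_0_1). A minimal standalone sketch of the same sysfs walk is shown below; it is illustrative only, not the common.sh implementation, and the 0x8086:0x159b (E810) ID pair is taken from the "Found ..." output above.

  # Sketch: list net devices backed by Intel E810 (0x8086:0x159b) functions via sysfs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086
      device=$(cat "$pci/device")    # e.g. 0x159b
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
          for net in "$pci"/net/*; do
              [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
          done
      fi
  done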
17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:29.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:29.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:29.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:41:29.995 00:41:29.995 --- 10.0.0.2 ping statistics --- 00:41:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.995 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:29.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:29.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:41:29.995 00:41:29.995 --- 10.0.0.1 ping statistics --- 00:41:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.995 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3348713 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3348713 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3348713 ']' 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
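The plumbing traced above gives a single host a two-endpoint TCP path: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, a pinhole iptables rule admits NVMe/TCP on port 4420, and two pings confirm reachability before nvmf_tgt is launched inside the namespace. Condensed into plain commands (interface names and addresses taken from the trace), the topology setup is:

  # Condensed sketch of the nvmftestinit loopback topology (names/IPs from the log).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP back in
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns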
00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:29.995 17:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.995 [2024-10-01 17:41:27.428442] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:29.995 [2024-10-01 17:41:27.429687] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:29.995 [2024-10-01 17:41:27.429743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:29.995 [2024-10-01 17:41:27.502339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:29.995 [2024-10-01 17:41:27.543397] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:29.995 [2024-10-01 17:41:27.543442] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:29.995 [2024-10-01 17:41:27.543450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:29.995 [2024-10-01 17:41:27.543457] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:29.995 [2024-10-01 17:41:27.543463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:29.996 [2024-10-01 17:41:27.543617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.996 [2024-10-01 17:41:27.543737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:29.996 [2024-10-01 17:41:27.543899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.996 [2024-10-01 17:41:27.543900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:29.996 [2024-10-01 17:41:27.544243] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
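nvmfappstart launches the target with -m 0xF --wait-for-rpc on top of the -i 0 -e 0xFFFF --interrupt-mode arguments assembled earlier, which is why the banner above reports four available cores and a reactor starts on each of cores 0 through 3 before any poll group exists. The core mask is just a bitmap of CPU IDs; a quick cross-check using plain bash arithmetic (shown only as an illustration):

  # 0xF selects cores 0-3: each set bit i pins a reactor to core i.
  printf '0x%X\n' $(( (1<<0) | (1<<1) | (1<<2) | (1<<3) ))   # prints 0xF
  # The traced launch line, for reference (path abbreviated, arguments from the log):
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc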
00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 [2024-10-01 17:41:28.331071] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:29.996 [2024-10-01 17:41:28.331316] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:29.996 [2024-10-01 17:41:28.332087] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:29.996 [2024-10-01 17:41:28.332113] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 [2024-10-01 17:41:28.344749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 Malloc0 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:29.996 [2024-10-01 17:41:28.420615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3349018 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3349021 00:41:29.996 17:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:29.996 { 00:41:29.996 "params": { 00:41:29.996 "name": "Nvme$subsystem", 00:41:29.996 "trtype": "$TEST_TRANSPORT", 00:41:29.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.996 "adrfam": "ipv4", 00:41:29.996 "trsvcid": "$NVMF_PORT", 00:41:29.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.996 "hdgst": ${hdgst:-false}, 00:41:29.996 "ddgst": ${ddgst:-false} 00:41:29.996 }, 00:41:29.996 "method": "bdev_nvme_attach_controller" 00:41:29.996 } 00:41:29.996 EOF 00:41:29.996 )") 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3349024 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3349027 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:29.996 { 00:41:29.996 "params": { 00:41:29.996 "name": "Nvme$subsystem", 00:41:29.996 "trtype": "$TEST_TRANSPORT", 00:41:29.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.996 "adrfam": "ipv4", 00:41:29.996 "trsvcid": "$NVMF_PORT", 00:41:29.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.996 "hdgst": ${hdgst:-false}, 00:41:29.996 "ddgst": ${ddgst:-false} 00:41:29.996 }, 00:41:29.996 "method": "bdev_nvme_attach_controller" 00:41:29.996 } 00:41:29.996 EOF 00:41:29.996 )") 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
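Once the target is up, bdev_io_wait.sh provisions it entirely over the RPC socket: bdev_set_options -p 5 -c 1 (a deliberately tiny bdev_io pool, which is what makes the io_wait path fire), framework_start_init to leave the --wait-for-rpc state, the NVMe/TCP transport with the traced -t tcp -o -u 8192 options, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; only then are the four bdevperf instances above started. The same sequence, issued by hand through scripts/rpc.py against the default /var/tmp/spdk.sock, is sketched below; it mirrors the rpc_cmd calls in the trace rather than adding any step.

  # Sketch: the traced rpc_cmd sequence as explicit scripts/rpc.py invocations.
  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # rpc_cmd in the harness does the equivalent
  rpc bdev_set_options -p 5 -c 1             # tiny bdev_io pool/cache -> exercises io_wait
  rpc framework_start_init
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420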
00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:29.996 { 00:41:29.996 "params": { 00:41:29.996 "name": "Nvme$subsystem", 00:41:29.996 "trtype": "$TEST_TRANSPORT", 00:41:29.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.996 "adrfam": "ipv4", 00:41:29.996 "trsvcid": "$NVMF_PORT", 00:41:29.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.996 "hdgst": ${hdgst:-false}, 00:41:29.996 "ddgst": ${ddgst:-false} 00:41:29.996 }, 00:41:29.996 "method": "bdev_nvme_attach_controller" 00:41:29.996 } 00:41:29.996 EOF 00:41:29.996 )") 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:29.996 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:29.996 { 00:41:29.996 "params": { 00:41:29.996 "name": "Nvme$subsystem", 00:41:29.997 "trtype": "$TEST_TRANSPORT", 00:41:29.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.997 "adrfam": "ipv4", 00:41:29.997 "trsvcid": "$NVMF_PORT", 00:41:29.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.997 "hdgst": ${hdgst:-false}, 00:41:29.997 "ddgst": ${ddgst:-false} 00:41:29.997 }, 00:41:29.997 "method": "bdev_nvme_attach_controller" 00:41:29.997 } 00:41:29.997 EOF 00:41:29.997 )") 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3349018 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
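Each bdevperf instance receives its controller configuration over a file descriptor: gen_nvmf_target_json expands the heredoc template above into a single bdev_nvme_attach_controller entry (the resolved form is printed by the printf calls a few lines below), and the result reaches bdevperf as --json /dev/fd/63 via process substitution. Written to a regular file instead, purely as an equivalent illustration (the subsystems/config envelope is the standard SPDK JSON-config shape and is assumed here, since only the inner entry appears verbatim in the trace), one instance's setup would look like:

  # Illustration only: the per-instance bdevperf config as a file (hypothetical path).
  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # One of the four traced invocations, pointed at the file instead of /dev/fd/63:
  build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256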
00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:29.997 "params": { 00:41:29.997 "name": "Nvme1", 00:41:29.997 "trtype": "tcp", 00:41:29.997 "traddr": "10.0.0.2", 00:41:29.997 "adrfam": "ipv4", 00:41:29.997 "trsvcid": "4420", 00:41:29.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.997 "hdgst": false, 00:41:29.997 "ddgst": false 00:41:29.997 }, 00:41:29.997 "method": "bdev_nvme_attach_controller" 00:41:29.997 }' 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:29.997 "params": { 00:41:29.997 "name": "Nvme1", 00:41:29.997 "trtype": "tcp", 00:41:29.997 "traddr": "10.0.0.2", 00:41:29.997 "adrfam": "ipv4", 00:41:29.997 "trsvcid": "4420", 00:41:29.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.997 "hdgst": false, 00:41:29.997 "ddgst": false 00:41:29.997 }, 00:41:29.997 "method": "bdev_nvme_attach_controller" 00:41:29.997 }' 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:29.997 "params": { 00:41:29.997 "name": "Nvme1", 00:41:29.997 "trtype": "tcp", 00:41:29.997 "traddr": "10.0.0.2", 00:41:29.997 "adrfam": "ipv4", 00:41:29.997 "trsvcid": "4420", 00:41:29.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.997 "hdgst": false, 00:41:29.997 "ddgst": false 00:41:29.997 }, 00:41:29.997 "method": "bdev_nvme_attach_controller" 00:41:29.997 }' 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:29.997 17:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:29.997 "params": { 00:41:29.997 "name": "Nvme1", 00:41:29.997 "trtype": "tcp", 00:41:29.997 "traddr": "10.0.0.2", 00:41:29.997 "adrfam": "ipv4", 00:41:29.997 "trsvcid": "4420", 00:41:29.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.997 "hdgst": false, 00:41:29.997 "ddgst": false 00:41:29.997 }, 00:41:29.997 "method": "bdev_nvme_attach_controller" 00:41:29.997 }' 00:41:29.997 [2024-10-01 17:41:28.473504] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:29.997 [2024-10-01 17:41:28.473559] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:29.997 [2024-10-01 17:41:28.477947] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:41:29.997 [2024-10-01 17:41:28.477992] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:29.997 [2024-10-01 17:41:28.479118] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:29.997 [2024-10-01 17:41:28.479167] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:29.997 [2024-10-01 17:41:28.480459] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:29.997 [2024-10-01 17:41:28.480507] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:30.257 [2024-10-01 17:41:28.614893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.257 [2024-10-01 17:41:28.632833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:41:30.257 [2024-10-01 17:41:28.674863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.257 [2024-10-01 17:41:28.694469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:41:30.257 [2024-10-01 17:41:28.718985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.257 [2024-10-01 17:41:28.739431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:41:30.257 [2024-10-01 17:41:28.765685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.257 [2024-10-01 17:41:28.782648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:41:30.517 Running I/O for 1 seconds... 00:41:30.517 Running I/O for 1 seconds... 00:41:30.776 Running I/O for 1 seconds... 00:41:30.776 Running I/O for 1 seconds... 
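The four jobs (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) each run for one second, and the per-job tables that follow report both IOPS and MiB/s. With a fixed 4 KiB I/O size the two columns are redundant, which makes them a quick sanity check on the numbers: MiB/s = IOPS * 4096 / 1048576. For example (awk used only for the floating-point math):

  # Cross-check of the result tables below: 4 KiB I/O, so MiB/s = IOPS * 4096 / 2^20.
  awk 'BEGIN { printf "%.2f MiB/s\n", 8161   * 4096 / 1048576 }'   # write job -> ~31.88
  awk 'BEGIN { printf "%.2f MiB/s\n", 20404  * 4096 / 1048576 }'   # unmap job -> ~79.70
  awk 'BEGIN { printf "%.2f MiB/s\n", 187960 * 4096 / 1048576 }'   # flush job -> ~734.22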
00:41:31.719 187960.00 IOPS, 734.22 MiB/s 00:41:31.719 Latency(us) 00:41:31.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.719 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:31.719 Nvme1n1 : 1.00 187585.33 732.76 0.00 0.00 678.55 312.32 1979.73 00:41:31.719 =================================================================================================================== 00:41:31.719 Total : 187585.33 732.76 0.00 0.00 678.55 312.32 1979.73 00:41:31.719 8161.00 IOPS, 31.88 MiB/s 00:41:31.719 Latency(us) 00:41:31.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.719 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:31.719 Nvme1n1 : 1.02 8168.28 31.91 0.00 0.00 15544.56 2061.65 22719.15 00:41:31.719 =================================================================================================================== 00:41:31.719 Total : 8168.28 31.91 0.00 0.00 15544.56 2061.65 22719.15 00:41:31.719 20404.00 IOPS, 79.70 MiB/s 00:41:31.719 Latency(us) 00:41:31.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.719 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:31.719 Nvme1n1 : 1.01 20458.41 79.92 0.00 0.00 6240.12 3099.31 10321.92 00:41:31.719 =================================================================================================================== 00:41:31.719 Total : 20458.41 79.92 0.00 0.00 6240.12 3099.31 10321.92 00:41:31.719 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3349021 00:41:31.719 8043.00 IOPS, 31.42 MiB/s 00:41:31.719 Latency(us) 00:41:31.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.719 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:31.719 Nvme1n1 : 1.01 8130.83 31.76 0.00 0.00 15698.98 3850.24 31894.19 00:41:31.719 =================================================================================================================== 00:41:31.719 Total : 8130.83 31.76 0.00 0.00 15698.98 3850.24 31894.19 00:41:31.979 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3349024 00:41:31.979 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3349027 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:31.980 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:31.980 rmmod nvme_tcp 00:41:31.980 rmmod nvme_fabrics 00:41:31.980 rmmod nvme_keyring 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3348713 ']' 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3348713 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3348713 ']' 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3348713 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3348713 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3348713' 00:41:31.980 killing process with pid 3348713 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3348713 00:41:31.980 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3348713 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:41:32.240 17:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:32.240 17:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:34.154 00:41:34.154 real 0m12.649s 00:41:34.154 user 0m15.470s 00:41:34.154 sys 0m7.331s 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:34.154 ************************************ 00:41:34.154 END TEST nvmf_bdev_io_wait 00:41:34.154 ************************************ 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:34.154 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:34.415 ************************************ 00:41:34.415 START TEST nvmf_queue_depth 00:41:34.415 ************************************ 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:34.415 * Looking for test storage... 
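The teardown traced just before the queue_depth test starts (nvmftestfini) mirrors the setup: unload the NVMe/TCP host modules, kill the target pid, strip only the iptables rules tagged with the SPDK_NVMF comment, then remove the namespace and flush the leftover address. Condensed into plain commands, as a sketch of what the traced helpers do rather than an extra step:

  # Sketch of the nvmftestfini cleanup traced above (names from the log).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                         # target started by nvmfappstart (pid 3348713 here)
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                         # what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1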
00:41:34.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.415 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:34.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.416 --rc genhtml_branch_coverage=1 00:41:34.416 --rc genhtml_function_coverage=1 00:41:34.416 --rc genhtml_legend=1 00:41:34.416 --rc geninfo_all_blocks=1 00:41:34.416 --rc geninfo_unexecuted_blocks=1 00:41:34.416 00:41:34.416 ' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:34.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.416 --rc genhtml_branch_coverage=1 00:41:34.416 --rc genhtml_function_coverage=1 00:41:34.416 --rc genhtml_legend=1 00:41:34.416 --rc geninfo_all_blocks=1 00:41:34.416 --rc geninfo_unexecuted_blocks=1 00:41:34.416 00:41:34.416 ' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:34.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.416 --rc genhtml_branch_coverage=1 00:41:34.416 --rc genhtml_function_coverage=1 00:41:34.416 --rc genhtml_legend=1 00:41:34.416 --rc geninfo_all_blocks=1 00:41:34.416 --rc geninfo_unexecuted_blocks=1 00:41:34.416 00:41:34.416 ' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:34.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.416 --rc genhtml_branch_coverage=1 00:41:34.416 --rc genhtml_function_coverage=1 00:41:34.416 --rc genhtml_legend=1 00:41:34.416 --rc geninfo_all_blocks=1 00:41:34.416 --rc 
geninfo_unexecuted_blocks=1 00:41:34.416 00:41:34.416 ' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:34.416 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:34.678 17:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.265 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:41.266 17:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:41.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:41.266 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:41:41.266 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:41.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.266 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:41.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:41.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:41:41.527 00:41:41.527 --- 10.0.0.2 ping statistics --- 00:41:41.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.527 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:41:41.527 17:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:41.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:41:41.527 00:41:41.527 --- 10.0.0.1 ping statistics --- 00:41:41.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.527 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3353429 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3353429 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3353429 ']' 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
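At this point nvmf_tcp_init has finished wiring the two E810 ports discovered above: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays on the host as 10.0.0.1 (the initiator side), and a one-packet ping succeeds in each direction. Condensed from the trace, the plumbing is essentially the following sequence (a sketch only; the interface names, addresses, and port 4420 are simply what this CI host uses, and the iptables rule's SPDK_NVMF comment tag is omitted here):

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends of the link: initiator on the host, target inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring the interfaces (and the namespace loopback) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit NVMe/TCP traffic (port 4420) arriving on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# connectivity check in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later nvmf_tgt invocation is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' (via NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 while bdevperf connects from the host side of the link.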
00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:41.527 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.789 [2024-10-01 17:41:40.121730] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:41.789 [2024-10-01 17:41:40.122865] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:41:41.789 [2024-10-01 17:41:40.122912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.789 [2024-10-01 17:41:40.208113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.789 [2024-10-01 17:41:40.238787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.789 [2024-10-01 17:41:40.238823] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.789 [2024-10-01 17:41:40.238832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.789 [2024-10-01 17:41:40.238838] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.789 [2024-10-01 17:41:40.238844] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:41.789 [2024-10-01 17:41:40.238865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.789 [2024-10-01 17:41:40.286506] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:41.789 [2024-10-01 17:41:40.286757] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:42.360 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.360 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:41:42.360 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:42.360 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:42.360 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 [2024-10-01 17:41:40.943623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 Malloc0 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.621 17:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.621 [2024-10-01 17:41:41.023760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3353650 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:42.621 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3353650 /var/tmp/bdevperf.sock 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3353650 ']' 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:42.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:42.622 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.622 [2024-10-01 17:41:41.079932] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:41:42.622 [2024-10-01 17:41:41.080003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353650 ] 00:41:42.622 [2024-10-01 17:41:41.144472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.882 [2024-10-01 17:41:41.184593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.882 NVMe0n1 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.882 17:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:42.882 Running I/O for 10 seconds... 00:41:53.317 9080.00 IOPS, 35.47 MiB/s 9220.50 IOPS, 36.02 MiB/s 9402.00 IOPS, 36.73 MiB/s 9476.25 IOPS, 37.02 MiB/s 9810.20 IOPS, 38.32 MiB/s 10227.33 IOPS, 39.95 MiB/s 10516.43 IOPS, 41.08 MiB/s 10746.12 IOPS, 41.98 MiB/s 10924.11 IOPS, 42.67 MiB/s 11070.70 IOPS, 43.24 MiB/s 00:41:53.317 Latency(us) 00:41:53.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.317 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:53.317 Verification LBA range: start 0x0 length 0x4000 00:41:53.317 NVMe0n1 : 10.05 11113.24 43.41 0.00 0.00 91833.98 14964.05 67283.63 00:41:53.317 =================================================================================================================== 00:41:53.317 Total : 11113.24 43.41 0.00 0.00 91833.98 14964.05 67283.63 00:41:53.317 { 00:41:53.317 "results": [ 00:41:53.317 { 00:41:53.317 "job": "NVMe0n1", 00:41:53.317 "core_mask": "0x1", 00:41:53.317 "workload": "verify", 00:41:53.317 "status": "finished", 00:41:53.317 "verify_range": { 00:41:53.317 "start": 0, 00:41:53.317 "length": 16384 00:41:53.317 }, 00:41:53.317 "queue_depth": 1024, 00:41:53.317 "io_size": 4096, 00:41:53.317 "runtime": 10.050268, 00:41:53.317 "iops": 11113.235985348849, 00:41:53.317 "mibps": 43.41107806776894, 00:41:53.317 "io_failed": 0, 00:41:53.317 "io_timeout": 0, 00:41:53.317 "avg_latency_us": 91833.98469181343, 00:41:53.317 "min_latency_us": 14964.053333333333, 00:41:53.317 "max_latency_us": 67283.62666666666 00:41:53.317 } 00:41:53.317 ], 00:41:53.317 "core_count": 1 00:41:53.317 } 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3353650 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3353650 ']' 
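The ten-second verify run above settles at roughly 11.1k IOPS (43.4 MiB/s of 4 KiB I/O) with an average latency near 92 ms, which is the profile Little's law predicts for 1024 outstanding I/Os at ~11k IOPS against a single Malloc-backed namespace. Condensed from the preceding trace, the setup driven through rpc_cmd amounts to the sequence below (a sketch; rpc_cmd is assumed here to be the autotest wrapper around scripts/rpc.py, and paths are relative to the spdk checkout in this workspace):

# target side (against the nvmf_tgt started inside the namespace):
# TCP transport, a 64 MiB / 512 B-block Malloc bdev, one subsystem exposing it on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles on its own RPC socket (-z), a controller is attached,
# then the 1024-deep, 4 KiB verify workload runs for 10 seconds
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The teardown that follows in the trace is the usual nvmftestfini path: kill bdevperf and nvmf_tgt, unload the nvme-tcp/nvme-fabrics modules, strip the SPDK_NVMF iptables entries, remove the spdk namespace, and flush the initiator-side address.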
00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3353650 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353650 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353650' 00:41:53.317 killing process with pid 3353650 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3353650 00:41:53.317 Received shutdown signal, test time was about 10.000000 seconds 00:41:53.317 00:41:53.317 Latency(us) 00:41:53.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.317 =================================================================================================================== 00:41:53.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3353650 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:53.317 rmmod nvme_tcp 00:41:53.317 rmmod nvme_fabrics 00:41:53.317 rmmod nvme_keyring 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3353429 ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3353429 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3353429 ']' 00:41:53.317 17:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3353429 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353429 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353429' 00:41:53.317 killing process with pid 3353429 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3353429 00:41:53.317 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3353429 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:53.578 17:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:56.121 00:41:56.121 real 0m21.320s 00:41:56.121 user 0m23.249s 00:41:56.121 sys 0m6.778s 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:56.121 ************************************ 00:41:56.121 END TEST nvmf_queue_depth 00:41:56.121 ************************************ 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:56.121 ************************************ 00:41:56.121 START TEST nvmf_target_multipath 00:41:56.121 ************************************ 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:56.121 * Looking for test storage... 00:41:56.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:56.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.121 --rc genhtml_branch_coverage=1 00:41:56.121 --rc genhtml_function_coverage=1 00:41:56.121 --rc genhtml_legend=1 00:41:56.121 --rc geninfo_all_blocks=1 00:41:56.121 --rc geninfo_unexecuted_blocks=1 00:41:56.121 00:41:56.121 ' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:56.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.121 --rc genhtml_branch_coverage=1 00:41:56.121 --rc genhtml_function_coverage=1 00:41:56.121 --rc genhtml_legend=1 00:41:56.121 --rc geninfo_all_blocks=1 00:41:56.121 --rc geninfo_unexecuted_blocks=1 00:41:56.121 00:41:56.121 ' 00:41:56.121 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:56.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.121 --rc genhtml_branch_coverage=1 00:41:56.121 --rc genhtml_function_coverage=1 00:41:56.121 --rc genhtml_legend=1 00:41:56.121 --rc geninfo_all_blocks=1 00:41:56.122 --rc geninfo_unexecuted_blocks=1 00:41:56.122 00:41:56.122 ' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:56.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.122 --rc genhtml_branch_coverage=1 00:41:56.122 --rc genhtml_function_coverage=1 00:41:56.122 --rc 
genhtml_legend=1 00:41:56.122 --rc geninfo_all_blocks=1 00:41:56.122 --rc geninfo_unexecuted_blocks=1 00:41:56.122 00:41:56.122 ' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:56.122 17:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:56.122 17:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:02.733 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:02.734 17:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:02.734 17:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:02.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:02.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.734 17:42:00 
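At nvmf/common.sh@320 through @356 the harness buckets candidate NICs by PCI vendor and device ID; both ports on this machine report 0x8086:0x159b, so they land in the e810 list, which then becomes pci_devs for a TCP run. A hedged sketch of that bucketing is below. It assumes pci_bus_cache maps "vendor:device" keys to PCI addresses, which is how the lookups above use it; the sample entry is made-up example data, and the real cache is populated elsewhere in common.sh.

    #!/usr/bin/env bash
    # Sketch of the vendor:device bucketing done by gather_supported_nvmf_pci_devs.
    # pci_bus_cache contents below are example data only.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1" )
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810 device IDs checked by the harness
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # matches both ports on this machine
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
    pci_devs=("${e810[@]}")                     # a TCP run keeps the e810 ports
    printf 'candidate NIC: %s\n' "${pci_devs[@]}"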
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:02.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:02.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:02.734 17:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:02.734 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:02.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:02.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:42:02.735 00:42:02.735 --- 10.0.0.2 ping statistics --- 00:42:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.735 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:02.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:02.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:42:02.735 00:42:02.735 --- 10.0.0.1 ping statistics --- 00:42:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.735 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:02.735 only one NIC for nvmf test 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:02.735 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.735 rmmod nvme_tcp 00:42:02.735 rmmod nvme_fabrics 00:42:02.995 rmmod nvme_keyring 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.995 17:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:04.910 17:42:03 
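The teardown that has just run (nvmftestfini, then nvmf_tcp_fini and iptr) relies on a tagging convention visible earlier in the run: every iptables rule the harness inserts (common.sh@788) carries an 'SPDK_NVMF:' comment, so iptr (common.sh@789) can remove exactly those rules by filtering the iptables-save output before feeding it back to iptables-restore. A minimal sketch of the pattern; it requires root, and the interface and port mirror the rule in this log.

    #!/usr/bin/env bash
    # Sketch of the rule-tagging convention seen above: every rule the harness
    # inserts carries an SPDK_NVMF comment, so teardown drops only those rules.
    add_tagged_rule() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Setup: allow NVMe/TCP traffic arriving on the initiator-side interface.
    add_tagged_rule -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Teardown: reload the ruleset minus every SPDK_NVMF-tagged line.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering the saved ruleset rather than deleting rules one by one means cleanup still works even if setup added rules the teardown code no longer enumerates explicitly.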
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.910 00:42:04.910 real 0m9.314s 00:42:04.910 user 0m2.043s 00:42:04.910 sys 0m5.200s 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.910 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:04.910 ************************************ 00:42:04.910 END TEST nvmf_target_multipath 00:42:04.910 ************************************ 00:42:05.171 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:05.171 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:05.172 ************************************ 00:42:05.172 START TEST nvmf_zcopy 00:42:05.172 ************************************ 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:05.172 * Looking for test storage... 
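Each sub-suite is driven through run_test, which produces the START/END banners and the bash time summary seen above (real 0m9.314s for nvmf_target_multipath) before the suite moves on to nvmf_zcopy. The wrapper's actual body lives in autotest_common.sh and is not shown in this log, so the following is only a plausible reduction of the behaviour visible in the output.

    #!/usr/bin/env bash
    # Plausible reduction of the run_test wrapper whose banners and time summary
    # appear in this log; the real helper also manages xtrace and rc bookkeeping.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test demo_sleep_test sleep 1    # hypothetical stand-in for a test script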
00:42:05.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:05.172 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:05.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.433 --rc genhtml_branch_coverage=1 00:42:05.433 --rc genhtml_function_coverage=1 00:42:05.433 --rc genhtml_legend=1 00:42:05.433 --rc geninfo_all_blocks=1 00:42:05.433 --rc geninfo_unexecuted_blocks=1 00:42:05.433 00:42:05.433 ' 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:05.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.433 --rc genhtml_branch_coverage=1 00:42:05.433 --rc genhtml_function_coverage=1 00:42:05.433 --rc genhtml_legend=1 00:42:05.433 --rc geninfo_all_blocks=1 00:42:05.433 --rc geninfo_unexecuted_blocks=1 00:42:05.433 00:42:05.433 ' 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:05.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.433 --rc genhtml_branch_coverage=1 00:42:05.433 --rc genhtml_function_coverage=1 00:42:05.433 --rc genhtml_legend=1 00:42:05.433 --rc geninfo_all_blocks=1 00:42:05.433 --rc geninfo_unexecuted_blocks=1 00:42:05.433 00:42:05.433 ' 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:05.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.433 --rc genhtml_branch_coverage=1 00:42:05.433 --rc genhtml_function_coverage=1 00:42:05.433 --rc genhtml_legend=1 00:42:05.433 --rc geninfo_all_blocks=1 00:42:05.433 --rc geninfo_unexecuted_blocks=1 00:42:05.433 00:42:05.433 ' 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:05.433 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:05.434 17:42:03 
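Both sub-suites open with the same lcov gate that appears at autotest_common.sh@1680 through @1695 above: the installed lcov version (last field of 'lcov --version' via awk) is split on separators and compared field by field against 2, and in this run (lcov 1.15) the extra '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options are exported. A simplified sketch of that comparison is below; the real cmp_versions in scripts/common.sh supports more operators and non-numeric fields.

    #!/usr/bin/env bash
    # Simplified field-by-field version comparison in the spirit of cmp_versions.
    version_lt() {
        local -a a b
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }

    lcov_ver=1.15    # stand-in for: lcov --version | awk '{print $NF}'
    if version_lt "$lcov_ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    echo "lcov options: ${lcov_rc_opt:-<none>}"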
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:05.434 17:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:13.581 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:13.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:13.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:13.581 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:13.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:42:13.581 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:13.582 17:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:13.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:13.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:42:13.582 00:42:13.582 --- 10.0.0.2 ping statistics --- 00:42:13.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.582 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:13.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:13.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:42:13.582 00:42:13.582 --- 10.0.0.1 ping statistics --- 00:42:13.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.582 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3364338 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3364338 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3364338 ']' 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:13.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:13.582 17:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 [2024-10-01 17:42:11.038935] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:13.582 [2024-10-01 17:42:11.039909] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:42:13.582 [2024-10-01 17:42:11.039948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:13.582 [2024-10-01 17:42:11.123351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.582 [2024-10-01 17:42:11.153899] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:13.582 [2024-10-01 17:42:11.153935] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:13.582 [2024-10-01 17:42:11.153944] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:13.582 [2024-10-01 17:42:11.153950] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:13.582 [2024-10-01 17:42:11.153956] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:13.582 [2024-10-01 17:42:11.153976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.582 [2024-10-01 17:42:11.201781] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:13.582 [2024-10-01 17:42:11.202034] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
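To summarize the nvmf_tcp_init block traced above: the two E810 ports (8086:159b, ice driver) were discovered as cvl_0_0 and cvl_0_1; cvl_0_1 stays in the default network namespace as the initiator-side interface (10.0.0.1), cvl_0_0 is moved into a private namespace as the target-side interface (10.0.0.2), TCP port 4420 is opened, both directions are ping-checked, and nvmf_tgt is then started inside that namespace in interrupt mode on core 1 (-m 0x2, matching the "Reactor started on core 1" notice). A condensed sketch of those steps, using the values from this particular run (interface names, addresses, a repo-relative nvmf_tgt path; the harness additionally tags the iptables rule with an SPDK_NVMF comment):

# Target-side port gets its own network namespace; the initiator-side port stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addressing: 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the netns).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator-side interface, then check reachability
# in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace: interrupt mode, core mask 0x2 (core 1),
# tracepoint group mask 0xFFFF.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2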
00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 [2024-10-01 17:42:11.870705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 [2024-10-01 17:42:11.898907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:13.582 17:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.582 malloc0 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.582 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:13.583 { 00:42:13.583 "params": { 00:42:13.583 "name": "Nvme$subsystem", 00:42:13.583 "trtype": "$TEST_TRANSPORT", 00:42:13.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:13.583 "adrfam": "ipv4", 00:42:13.583 "trsvcid": "$NVMF_PORT", 00:42:13.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:13.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:13.583 "hdgst": ${hdgst:-false}, 00:42:13.583 "ddgst": ${ddgst:-false} 00:42:13.583 }, 00:42:13.583 "method": "bdev_nvme_attach_controller" 00:42:13.583 } 00:42:13.583 EOF 00:42:13.583 )") 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:42:13.583 17:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:13.583 "params": { 00:42:13.583 "name": "Nvme1", 00:42:13.583 "trtype": "tcp", 00:42:13.583 "traddr": "10.0.0.2", 00:42:13.583 "adrfam": "ipv4", 00:42:13.583 "trsvcid": "4420", 00:42:13.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:13.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:13.583 "hdgst": false, 00:42:13.583 "ddgst": false 00:42:13.583 }, 00:42:13.583 "method": "bdev_nvme_attach_controller" 00:42:13.583 }' 00:42:13.583 [2024-10-01 17:42:12.009683] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
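The RPC sequence traced above (target/zcopy.sh@22 through @33) is the heart of this test: the TCP transport is created with --zcopy so the target uses zero-copy sends, a subsystem is exported on 10.0.0.2:4420 with a single 32 MiB / 4 KiB-block malloc bdev attached as NSID 1, and bdevperf is then pointed at it with the JSON object printed above. A hedged restatement as standalone commands, assuming rpc_cmd forwards to scripts/rpc.py as the harness normally does and that the /dev/fd/62 argument came from a <(gen_nvmf_target_json) process substitution (flags are copied verbatim from the trace; the rpc.py spelling is the assumption):

# Transport with zero-copy enabled, then the subsystem, its data and discovery
# listeners, and a 32 MiB malloc bdev exported as namespace 1.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# First bdevperf pass: 10 s verify workload, queue depth 128, 8 KiB I/O, with the bdev
# configuration handed over on a file descriptor.
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The results table a few lines further down reports roughly 8.2k IOPS (~64 MiB/s) for that verify pass.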
00:42:13.583 [2024-10-01 17:42:12.009738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364610 ] 00:42:13.583 [2024-10-01 17:42:12.070643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.583 [2024-10-01 17:42:12.103710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.155 Running I/O for 10 seconds... 00:42:24.036 6525.00 IOPS, 50.98 MiB/s 6571.50 IOPS, 51.34 MiB/s 6582.00 IOPS, 51.42 MiB/s 6593.00 IOPS, 51.51 MiB/s 6810.00 IOPS, 53.20 MiB/s 7269.00 IOPS, 56.79 MiB/s 7595.00 IOPS, 59.34 MiB/s 7830.62 IOPS, 61.18 MiB/s 8019.00 IOPS, 62.65 MiB/s 8174.30 IOPS, 63.86 MiB/s 00:42:24.036 Latency(us) 00:42:24.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:24.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:24.036 Verification LBA range: start 0x0 length 0x1000 00:42:24.036 Nvme1n1 : 10.01 8178.33 63.89 0.00 0.00 15599.24 1481.39 27525.12 00:42:24.036 =================================================================================================================== 00:42:24.036 Total : 8178.33 63.89 0.00 0.00 15599.24 1481.39 27525.12 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3366507 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:24.298 { 00:42:24.298 "params": { 00:42:24.298 "name": "Nvme$subsystem", 00:42:24.298 "trtype": "$TEST_TRANSPORT", 00:42:24.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:24.298 "adrfam": "ipv4", 00:42:24.298 "trsvcid": "$NVMF_PORT", 00:42:24.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:24.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:24.298 "hdgst": ${hdgst:-false}, 00:42:24.298 "ddgst": ${ddgst:-false} 00:42:24.298 }, 00:42:24.298 "method": "bdev_nvme_attach_controller" 00:42:24.298 } 00:42:24.298 EOF 00:42:24.298 )") 00:42:24.298 [2024-10-01 17:42:22.602298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.298 [2024-10-01 17:42:22.602327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- 
# jq . 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:42:24.298 17:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:24.298 "params": { 00:42:24.298 "name": "Nvme1", 00:42:24.298 "trtype": "tcp", 00:42:24.298 "traddr": "10.0.0.2", 00:42:24.298 "adrfam": "ipv4", 00:42:24.298 "trsvcid": "4420", 00:42:24.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:24.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:24.298 "hdgst": false, 00:42:24.298 "ddgst": false 00:42:24.298 }, 00:42:24.298 "method": "bdev_nvme_attach_controller" 00:42:24.299 }' 00:42:24.299 [2024-10-01 17:42:22.614265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.614274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.626264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.626272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.638263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.638271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.647806] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:42:24.299 [2024-10-01 17:42:22.647857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366507 ] 00:42:24.299 [2024-10-01 17:42:22.650263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.650271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.662263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.662275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.674263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.674271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.686263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.686270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.698263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.698270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.706648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.299 [2024-10-01 17:42:22.710264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.710273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.722265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.722277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.734265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.734280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.737331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.299 [2024-10-01 17:42:22.746264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.746270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.758268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.758281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.770267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.770277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.782266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.782276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.794263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.794272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.806271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.806287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.818266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.818275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.830266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.830277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.299 [2024-10-01 17:42:22.842266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.299 [2024-10-01 17:42:22.842275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.854264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.854273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.866264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.866271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.878264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.878276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.890264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.890274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.902263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.902271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.914263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.914271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.926264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.926272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.938263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.938272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.950263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.560 [2024-10-01 17:42:22.950271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.560 [2024-10-01 17:42:22.962263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:22.962270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:22.974264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:22.974272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:22.986269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:22.986284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 Running I/O for 5 seconds... 
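From here to the end of the section the trace is dominated by alternating "Requested NSID 1 already in use" / "Unable to add namespace" pairs. They belong to the phase started at target/zcopy.sh@37-@41 above: a second bdevperf job (perfpid=3366507; 5 s randrw, 50/50 mix via -M 50, queue depth 128, 8 KiB I/O) runs against the subsystem while the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Since that namespace is still attached, every attempt is rejected, apparently to confirm that namespace-management failures do not disturb in-flight zero-copy I/O (the running job still reports throughput in between, e.g. the 18779.00 IOPS sample further down). A sketch of that pattern, with the loop shape, the kill -0 liveness check, and the trailing wait assumed rather than copied from target/zcopy.sh:

# Background 5 s random read/write job (workload flags as traced above; the JSON bdev
# config is again fed via process substitution, /dev/fd/63 in the trace).
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# Keep re-adding an NSID that is already in use while the job runs; every call is
# expected to fail with the two messages that repeat through the rest of this section.
while kill -0 "$perfpid" 2> /dev/null; do
    ! rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
done
wait "$perfpid"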
00:42:24.561 [2024-10-01 17:42:23.003293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.003309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.017746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.017764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.030154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.030170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.042945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.042960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.057781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.057797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.070655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.070670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.085703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.085718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.561 [2024-10-01 17:42:23.099249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.561 [2024-10-01 17:42:23.099264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.113244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.113260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.126476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.126496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.138374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.138390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.151346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.151361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.166286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.166301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.179072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.179086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.193410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 
[2024-10-01 17:42:23.193425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.206228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.206243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.218526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.218540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.233606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.233622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.246761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.246776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.261957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.261972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.274422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.274437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.289692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.289707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.302987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.303006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.317442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.317457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.330401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.330416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.345377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.345392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.821 [2024-10-01 17:42:23.358201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.821 [2024-10-01 17:42:23.358217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.082 [2024-10-01 17:42:23.370627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.082 [2024-10-01 17:42:23.370642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.082 [2024-10-01 17:42:23.385845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.082 [2024-10-01 17:42:23.385865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.082 [2024-10-01 17:42:23.398375] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.082 [2024-10-01 17:42:23.398391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.082 [2024-10-01 17:42:23.410749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.082 [2024-10-01 17:42:23.410764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.082 [2024-10-01 17:42:23.425385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.425399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.438254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.438269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.450463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.450477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.465410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.465425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.478517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.478532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.493000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.493015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.506105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.506121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.518267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.518282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.530919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.530934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.546045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.546060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.559120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.559135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.573553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.573568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.586629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.586644] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.601799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.601815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.614414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.614429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.083 [2024-10-01 17:42:23.627055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.083 [2024-10-01 17:42:23.627070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.641238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.641253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.653868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.653882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.666775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.666789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.682017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.682032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.694484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.694498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.709413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.709428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.722279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.722293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.734541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.734555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.749311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.749326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.762365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.762380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.775296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.775310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.788945] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.788960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.801827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.801841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.814835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.814850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.829176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.829191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.841847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.841861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.854992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.855012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.869368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.869383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.344 [2024-10-01 17:42:23.882260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.344 [2024-10-01 17:42:23.882275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.894818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.894833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.909297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.909312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.922607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.922623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.937293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.937309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.950851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.950865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.964960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.964975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.977866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.977881] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:23.990570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:23.990584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 18779.00 IOPS, 146.71 MiB/s [2024-10-01 17:42:24.005638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.005653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.018757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.018771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.033381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.033396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.046291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.046306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.058485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.058500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.071119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.605 [2024-10-01 17:42:24.071133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.605 [2024-10-01 17:42:24.084942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.084958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.606 [2024-10-01 17:42:24.098001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.098018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.606 [2024-10-01 17:42:24.110794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.110809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.606 [2024-10-01 17:42:24.125385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.125401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.606 [2024-10-01 17:42:24.138451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.138466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.606 [2024-10-01 17:42:24.150146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.606 [2024-10-01 17:42:24.150162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.162684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.162700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.177477] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.177492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.190032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.190048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.202350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.202365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.214725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.214739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.229616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.229632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.242531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.242546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.257376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.257392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.270560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.270574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.285400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.285415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.298579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.298594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.313396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.313411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.326637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.326651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.341670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.341686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.354608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.354623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.369102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.369118] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.382200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.382216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.394105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.394124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.867 [2024-10-01 17:42:24.407302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.867 [2024-10-01 17:42:24.407317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.421132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.421148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.433860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.433875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.446760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.446775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.461370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.461385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.474688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.474703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.489310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.489325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.502122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.502137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.514439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.514453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.529855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.529870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.542310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.542325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.555193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.555208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.570143] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.570158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.583454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.583469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.597255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.597270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.610395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.610409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.128 [2024-10-01 17:42:24.625322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.128 [2024-10-01 17:42:24.625337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.129 [2024-10-01 17:42:24.638169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.129 [2024-10-01 17:42:24.638184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.129 [2024-10-01 17:42:24.650393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.129 [2024-10-01 17:42:24.650412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.129 [2024-10-01 17:42:24.663354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.129 [2024-10-01 17:42:24.663370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.678143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.678159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.690794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.690809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.705765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.705780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.718394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.718408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.733611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.733626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.746517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.746531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.761339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.761354] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.774205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.774221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.786980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.787001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.801659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.801674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.814491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.814505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.829453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.829468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.842382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.842396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.854026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.854041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.866653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.866668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.881372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.881387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.894383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.894398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.907164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.907186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.921285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.921300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.390 [2024-10-01 17:42:24.934083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.390 [2024-10-01 17:42:24.934098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:24.946740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:24.946754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:24.961287] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:24.961303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:24.974666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:24.974680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:24.989593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:24.989608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 18787.00 IOPS, 146.77 MiB/s [2024-10-01 17:42:25.002258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.002273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.017814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.017829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.030524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.030539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.045573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.045588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.058386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.058401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.073856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.073872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.086112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.086127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.097832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.097846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.110825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.110839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.125162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.125177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.138823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.651 [2024-10-01 17:42:25.138837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.651 [2024-10-01 17:42:25.153615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.652 [2024-10-01 
17:42:25.153631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.652 [2024-10-01 17:42:25.166548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.652 [2024-10-01 17:42:25.166562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.652 [2024-10-01 17:42:25.181419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.652 [2024-10-01 17:42:25.181434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.652 [2024-10-01 17:42:25.193965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.652 [2024-10-01 17:42:25.193980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.911 [2024-10-01 17:42:25.206907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.911 [2024-10-01 17:42:25.206921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.911 [2024-10-01 17:42:25.221937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.911 [2024-10-01 17:42:25.221952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.911 [2024-10-01 17:42:25.234637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.911 [2024-10-01 17:42:25.234651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.911 [2024-10-01 17:42:25.249438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.911 [2024-10-01 17:42:25.249453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.911 [2024-10-01 17:42:25.262219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.262234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.274705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.274720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.289718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.289732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.302497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.302511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.317174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.317189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.330442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.330457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.342711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.342725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.357527] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.357542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.370118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.370133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.383160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.383174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.398352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.398367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.411024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.411038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.424801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.424817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.438228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.438242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.912 [2024-10-01 17:42:25.450506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.912 [2024-10-01 17:42:25.450521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.462736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.462752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.477414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.477429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.490475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.490490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.502218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.502232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.514253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.514269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.526375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.526390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.539588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.539602] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.553112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.553127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.566490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.566504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.581105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.581120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.594097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.594112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.606827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.606841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.621410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.621424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.634274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.634289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.646646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.646660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.661722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.661737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.674934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.674949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.689744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.689759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.702434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.702449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.172 [2024-10-01 17:42:25.717507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.172 [2024-10-01 17:42:25.717522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.730285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.730300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.742326] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.742341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.755074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.755088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.769580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.769594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.782413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.782427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.797120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.797135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.810393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.810408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.823023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.823037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.837985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.838004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.851032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.851046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.865841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.865857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.878459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.878473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.893599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.893614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.906383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.906398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.918229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.918248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.931289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.931304] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.944943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.944958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.958103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.958119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.433 [2024-10-01 17:42:25.970687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.433 [2024-10-01 17:42:25.970701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:25.985694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:25.985710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:25.998752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:25.998767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 18810.67 IOPS, 146.96 MiB/s [2024-10-01 17:42:26.013396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.013412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.026211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.026227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.038354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.038369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.051290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.051305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.065870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.065885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.078973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.078988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.093563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.093580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.106418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.106433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.118803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.118818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.133476] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.133492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.146713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.146728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.161821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.161837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.174457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.174476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.186030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.186045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.199310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.199325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.213854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.213869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.694 [2024-10-01 17:42:26.226775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.694 [2024-10-01 17:42:26.226791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.241743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.241759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.254967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.254982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.269560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.269576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.282323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.282339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.294969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.294986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.309885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.309901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.323013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.323028] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.337728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.337743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.350448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.955 [2024-10-01 17:42:26.350463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.955 [2024-10-01 17:42:26.365416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.365432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.378124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.378139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.390577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.390592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.405310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.405326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.418185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.418201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.430115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.430134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.443096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.443112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.456724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.456739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.469394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.469409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.481859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.481874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.956 [2024-10-01 17:42:26.495089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.956 [2024-10-01 17:42:26.495105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.510110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.510134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.523122] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.523138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.537606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.537622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.550692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.550707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.565695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.565710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.579014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.579029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.593494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.593509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.606321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.606337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.619419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.619434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.634166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.634181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.646858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.646872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.661544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.661560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.674089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.674104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.686919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.686938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.702002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.702018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.714679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.714695] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.730081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.730097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.742973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.742988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.217 [2024-10-01 17:42:26.757070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.217 [2024-10-01 17:42:26.757085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.770112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.770127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.782063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.782078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.795287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.795302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.808471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.808486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.821180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.821194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.834103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.834118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.847041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.847057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.861750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.861765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.875051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.875066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.889691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.889706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.902650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.902665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.917736] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.917751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.930914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.930929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.945371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.945387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.957981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.958001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.970975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.970990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.985932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.985948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 [2024-10-01 17:42:26.999179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:26.999194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.478 18810.00 IOPS, 146.95 MiB/s [2024-10-01 17:42:27.013556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.478 [2024-10-01 17:42:27.013572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.738 [2024-10-01 17:42:27.026425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.738 [2024-10-01 17:42:27.026440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.738 [2024-10-01 17:42:27.041268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.738 [2024-10-01 17:42:27.041283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.738 [2024-10-01 17:42:27.054421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.738 [2024-10-01 17:42:27.054436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.738 [2024-10-01 17:42:27.066829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.738 [2024-10-01 17:42:27.066844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.738 [2024-10-01 17:42:27.081092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.081108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.094104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.094119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.107013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 
17:42:27.107028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.121803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.121819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.134339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.134355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.147159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.147173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.161530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.161545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.174393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.174408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.186739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.186754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.201285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.201299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.214419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.214434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.226170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.226185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.238866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.238881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.253867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.253882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.266623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.266638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.739 [2024-10-01 17:42:27.281484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.739 [2024-10-01 17:42:27.281499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.294362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.294378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.305912] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.305927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.318933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.318947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.333817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.333834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.346687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.346701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.361906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.361921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.374477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.374491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.388989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.389009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.402573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.402587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.417872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.417888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.430728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.430743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.445598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.445620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.458684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.458698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.473738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.473752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.486485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.486500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.498098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.498113] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.510805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.510820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.525529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.525545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.999 [2024-10-01 17:42:27.538619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.999 [2024-10-01 17:42:27.538634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.553107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.553122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.566523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.566537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.581556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.581571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.594341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.594356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.607037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.607052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.622091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.622106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.634507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.634521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.649626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.649641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.662562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.662576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.677637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.677651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.690420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.690435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.703166] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.703185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.717946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.717961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.730595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.259 [2024-10-01 17:42:27.730610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.259 [2024-10-01 17:42:27.745372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.260 [2024-10-01 17:42:27.745388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.260 [2024-10-01 17:42:27.758459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.260 [2024-10-01 17:42:27.758474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.260 [2024-10-01 17:42:27.770938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.260 [2024-10-01 17:42:27.770953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.260 [2024-10-01 17:42:27.785347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.260 [2024-10-01 17:42:27.785363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.260 [2024-10-01 17:42:27.797899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.260 [2024-10-01 17:42:27.797914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.520 [2024-10-01 17:42:27.810573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.520 [2024-10-01 17:42:27.810589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.520 [2024-10-01 17:42:27.825369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.520 [2024-10-01 17:42:27.825385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.520 [2024-10-01 17:42:27.838253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.520 [2024-10-01 17:42:27.838268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.520 [2024-10-01 17:42:27.850765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.520 [2024-10-01 17:42:27.850780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.865188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.865203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.878591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.878606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.893449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.893463] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.906011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.906026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.918664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.918679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.933674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.933689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.946280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.946294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.959058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.959076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.973855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.973871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:27.986679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:27.986694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:28.001217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.001232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 18817.40 IOPS, 147.01 MiB/s [2024-10-01 17:42:28.013364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.013379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 00:42:29.521 Latency(us) 00:42:29.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:29.521 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:42:29.521 Nvme1n1 : 5.01 18813.36 146.98 0.00 0.00 6796.00 2443.95 12561.07 00:42:29.521 =================================================================================================================== 00:42:29.521 Total : 18813.36 146.98 0.00 0.00 6796.00 2443.95 12561.07 00:42:29.521 [2024-10-01 17:42:28.022269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.022284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:28.034271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.034284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 17:42:28.046270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.046282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.521 [2024-10-01 
17:42:28.058269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.521 [2024-10-01 17:42:28.058282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.070267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.070279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.082265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.082275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.094265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.094274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.106268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.106279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.118267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.118278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 [2024-10-01 17:42:28.130263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.781 [2024-10-01 17:42:28.130272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3366507) - No such process 00:42:29.781 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3366507 00:42:29.781 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:29.781 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:29.782 delay0 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.782 17:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:29.782 [2024-10-01 17:42:28.271479] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:37.925 [2024-10-01 17:42:35.443707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1a00 is same with the state(6) to be set 00:42:37.925 Initializing NVMe Controllers 00:42:37.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:37.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:37.925 Initialization complete. Launching workers. 00:42:37.925 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 217, failed: 37898 00:42:37.925 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 37958, failed to submit 157 00:42:37.925 success 37898, unsuccessful 60, failed 0 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:37.925 rmmod nvme_tcp 00:42:37.925 rmmod nvme_fabrics 00:42:37.925 rmmod nvme_keyring 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3364338 ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3364338 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3364338 ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3364338 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3364338 00:42:37.925 17:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3364338' 00:42:37.925 killing process with pid 3364338 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3364338 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3364338 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:37.925 17:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:39.309 00:42:39.309 real 0m34.264s 00:42:39.309 user 0m44.218s 00:42:39.309 sys 0m12.131s 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:39.309 ************************************ 00:42:39.309 END TEST nvmf_zcopy 00:42:39.309 ************************************ 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:39.309 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:39.571 ************************************ 00:42:39.571 START TEST nvmf_nmic 00:42:39.571 ************************************ 00:42:39.571 17:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:39.571 * Looking for test storage... 00:42:39.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:39.571 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:39.571 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:42:39.571 17:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.571 --rc genhtml_branch_coverage=1 00:42:39.571 --rc genhtml_function_coverage=1 00:42:39.571 --rc genhtml_legend=1 00:42:39.571 --rc geninfo_all_blocks=1 00:42:39.571 --rc geninfo_unexecuted_blocks=1 00:42:39.571 00:42:39.571 ' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.571 --rc genhtml_branch_coverage=1 00:42:39.571 --rc genhtml_function_coverage=1 00:42:39.571 --rc genhtml_legend=1 00:42:39.571 --rc geninfo_all_blocks=1 00:42:39.571 --rc geninfo_unexecuted_blocks=1 00:42:39.571 00:42:39.571 ' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.571 --rc genhtml_branch_coverage=1 00:42:39.571 --rc genhtml_function_coverage=1 00:42:39.571 --rc genhtml_legend=1 00:42:39.571 --rc geninfo_all_blocks=1 00:42:39.571 --rc geninfo_unexecuted_blocks=1 00:42:39.571 00:42:39.571 ' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.571 --rc genhtml_branch_coverage=1 00:42:39.571 --rc genhtml_function_coverage=1 00:42:39.571 --rc genhtml_legend=1 00:42:39.571 --rc geninfo_all_blocks=1 00:42:39.571 --rc geninfo_unexecuted_blocks=1 00:42:39.571 00:42:39.571 ' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:39.571 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:39.572 17:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:39.572 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:39.833 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:39.833 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:39.833 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:39.833 17:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:47.975 17:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:47.975 17:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:47.975 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:47.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:47.976 17:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:47.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:47.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:47.976 
17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:47.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
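The trace above (nvmf_tcp_init in nvmf/common.sh) shows how the harness wires up the two E810 ports it just discovered: cvl_0_0 is moved into a private network namespace and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the link-up, iptables and ping checks follow immediately below. Condensed into plain commands, with the namespace, interface names and addresses copied from the trace (this is a sketch of the traced steps only, not the full nvmf_tcp_init logic):

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule and the two ping runs that follow in the log simply verify that 10.0.0.2 and 10.0.0.1 can reach each other before the target is started.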
00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:47.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:47.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:42:47.976 00:42:47.976 --- 10.0.0.2 ping statistics --- 00:42:47.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.976 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:47.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:47.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:42:47.976 00:42:47.976 --- 10.0.0.1 ping statistics --- 00:42:47.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.976 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3373053 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 3373053 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3373053 ']' 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.976 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 [2024-10-01 17:42:45.386253] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:47.977 [2024-10-01 17:42:45.387219] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:42:47.977 [2024-10-01 17:42:45.387256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:47.977 [2024-10-01 17:42:45.453351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:47.977 [2024-10-01 17:42:45.485795] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:47.977 [2024-10-01 17:42:45.485834] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:47.977 [2024-10-01 17:42:45.485842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:47.977 [2024-10-01 17:42:45.485849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:47.977 [2024-10-01 17:42:45.485855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:47.977 [2024-10-01 17:42:45.486007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.977 [2024-10-01 17:42:45.486126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:47.977 [2024-10-01 17:42:45.486429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.977 [2024-10-01 17:42:45.486429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:47.977 [2024-10-01 17:42:45.543102] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:47.977 [2024-10-01 17:42:45.543285] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:47.977 [2024-10-01 17:42:45.544269] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
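Per the trace, nvmfappstart -m 0xF amounts to launching nvmf_tgt inside that namespace with all tracepoint groups enabled, a four-core mask and --interrupt-mode, then waiting for the RPC socket; the NOTICE lines above confirm that the reactors on cores 0-3 and the poll-group threads all come up in interrupt mode. A minimal sketch of that launch, with the binary path and flags copied from the trace (the wait loop below is a simplified stand-in for the suite's waitforlisten helper and assumes the default /var/tmp/spdk.sock socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target in the target-side namespace: instance 0, tracepoint mask 0xFFFF,
    # interrupt mode, reactors on cores 0-3.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Block until the JSON-RPC socket answers before configuring the target.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done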
00:42:47.977 [2024-10-01 17:42:45.544818] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:47.977 [2024-10-01 17:42:45.544908] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 [2024-10-01 17:42:45.627203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 Malloc0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
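With the target up, the rpc_cmd calls traced above perform the usual NVMe-oF/TCP bring-up: create the TCP transport, create a 64 MB malloc bdev, create subsystem cnode1, attach the bdev as a namespace and add a listener on 10.0.0.2:4420. Gathered in one place (rpc_cmd is the suite's wrapper around scripts/rpc.py; every argument below is copied from the trace):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    "$rpc" nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u 8192 sets the I/O unit size
    "$rpc" bdev_malloc_create 64 512 -b Malloc0         # 64 MB bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 just below then creates a second subsystem (cnode2) and tries to attach the same Malloc0 to it; since cnode1 already holds an exclusive_write claim on the bdev, that nvmf_subsystem_add_ns call is expected to fail with the "Invalid parameters" JSON-RPC error recorded in the log.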
00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 [2024-10-01 17:42:45.691056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:47.977 test case1: single bdev can't be used in multiple subsystems 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 [2024-10-01 17:42:45.726805] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:47.977 [2024-10-01 17:42:45.726824] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:47.977 [2024-10-01 17:42:45.726832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:47.977 request: 00:42:47.977 { 00:42:47.977 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:47.977 "namespace": { 00:42:47.977 "bdev_name": "Malloc0", 00:42:47.977 "no_auto_visible": false 00:42:47.977 }, 00:42:47.977 "method": "nvmf_subsystem_add_ns", 00:42:47.977 "req_id": 1 00:42:47.977 } 00:42:47.977 Got JSON-RPC error response 00:42:47.977 response: 00:42:47.977 { 00:42:47.977 "code": -32602, 00:42:47.977 "message": "Invalid parameters" 00:42:47.977 } 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:47.977 17:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:47.977 Adding namespace failed - expected result. 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:47.977 test case2: host connect to nvmf target in multiple paths 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.977 [2024-10-01 17:42:45.738913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.977 17:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:47.977 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:48.239 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:48.239 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:42:48.239 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:48.239 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:48.239 17:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:42:50.152 17:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:50.152 [global] 00:42:50.152 thread=1 00:42:50.152 invalidate=1 
00:42:50.152 rw=write 00:42:50.152 time_based=1 00:42:50.152 runtime=1 00:42:50.152 ioengine=libaio 00:42:50.152 direct=1 00:42:50.152 bs=4096 00:42:50.152 iodepth=1 00:42:50.152 norandommap=0 00:42:50.152 numjobs=1 00:42:50.152 00:42:50.152 verify_dump=1 00:42:50.152 verify_backlog=512 00:42:50.152 verify_state_save=0 00:42:50.152 do_verify=1 00:42:50.152 verify=crc32c-intel 00:42:50.152 [job0] 00:42:50.152 filename=/dev/nvme0n1 00:42:50.152 Could not set queue depth (nvme0n1) 00:42:50.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:50.412 fio-3.35 00:42:50.412 Starting 1 thread 00:42:51.910 00:42:51.910 job0: (groupid=0, jobs=1): err= 0: pid=3373922: Tue Oct 1 17:42:50 2024 00:42:51.910 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec) 00:42:51.910 slat (nsec): min=27098, max=33871, avg=28158.12, stdev=1574.67 00:42:51.910 clat (usec): min=41004, max=42011, avg=41677.92, stdev=419.63 00:42:51.910 lat (usec): min=41031, max=42039, avg=41706.07, stdev=418.87 00:42:51.910 clat percentiles (usec): 00:42:51.910 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:51.910 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:42:51.910 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:51.910 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:51.910 | 99.99th=[42206] 00:42:51.910 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:42:51.910 slat (usec): min=9, max=24669, avg=78.78, stdev=1088.94 00:42:51.910 clat (usec): min=222, max=758, avg=521.73, stdev=105.02 00:42:51.910 lat (usec): min=232, max=25210, avg=600.52, stdev=1095.42 00:42:51.910 clat percentiles (usec): 00:42:51.910 | 1.00th=[ 237], 5.00th=[ 343], 10.00th=[ 371], 20.00th=[ 441], 00:42:51.910 | 30.00th=[ 461], 40.00th=[ 519], 50.00th=[ 537], 60.00th=[ 545], 00:42:51.910 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 685], 00:42:51.910 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 758], 00:42:51.910 | 99.99th=[ 758] 00:42:51.910 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:51.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:51.910 lat (usec) : 250=1.51%, 500=34.40%, 750=60.68%, 1000=0.19% 00:42:51.910 lat (msec) : 50=3.21% 00:42:51.910 cpu : usr=1.27%, sys=1.66%, ctx=532, majf=0, minf=1 00:42:51.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.910 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:51.910 00:42:51.910 Run status group 0 (all jobs): 00:42:51.910 READ: bw=66.4KiB/s (68.0kB/s), 66.4KiB/s-66.4KiB/s (68.0kB/s-68.0kB/s), io=68.0KiB (69.6kB), run=1024-1024msec 00:42:51.910 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:42:51.910 00:42:51.910 Disk stats (read/write): 00:42:51.910 nvme0n1: ios=40/512, merge=0/0, ticks=1570/204, in_queue=1774, util=98.30% 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:51.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:51.910 
17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:51.910 rmmod nvme_tcp 00:42:51.910 rmmod nvme_fabrics 00:42:51.910 rmmod nvme_keyring 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3373053 ']' 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3373053 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3373053 ']' 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3373053 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3373053 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3373053' 00:42:51.910 killing process with pid 3373053 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3373053 00:42:51.910 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3373053 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:52.171 17:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:54.713 00:42:54.713 real 0m14.799s 00:42:54.713 user 0m36.765s 00:42:54.713 sys 0m7.222s 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:54.713 ************************************ 00:42:54.713 END TEST nvmf_nmic 00:42:54.713 ************************************ 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:54.713 ************************************ 00:42:54.713 START TEST nvmf_fio_target 00:42:54.713 ************************************ 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:54.713 * Looking for test storage... 
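The nmic run that just finished ("END TEST nvmf_nmic" above) hinges on one RPC-level check: a malloc bdev that is already claimed by one subsystem cannot be added as a namespace to a second one. Below is a minimal sketch of that sequence, reconstructed from the trace rather than copied from target/nmic.sh, and assuming a running nvmf_tgt that already exposes Malloc0 through nqn.2016-06.io.spdk:cnode1.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create a second subsystem and listener alongside cnode1 (same NQN and address as in the log).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

# Malloc0 is already claimed (type exclusive_write) by cnode1, so this RPC is expected
# to fail with "Invalid parameters"; the check passes only if the add is rejected.
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: second subsystem was able to claim Malloc0'
    exit 1
fi
echo 'Adding namespace failed - expected result.'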
00:42:54.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:54.713 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.714 --rc genhtml_branch_coverage=1 00:42:54.714 --rc genhtml_function_coverage=1 00:42:54.714 --rc genhtml_legend=1 00:42:54.714 --rc geninfo_all_blocks=1 00:42:54.714 --rc geninfo_unexecuted_blocks=1 00:42:54.714 00:42:54.714 ' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.714 --rc genhtml_branch_coverage=1 00:42:54.714 --rc genhtml_function_coverage=1 00:42:54.714 --rc genhtml_legend=1 00:42:54.714 --rc geninfo_all_blocks=1 00:42:54.714 --rc geninfo_unexecuted_blocks=1 00:42:54.714 00:42:54.714 ' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.714 --rc genhtml_branch_coverage=1 00:42:54.714 --rc genhtml_function_coverage=1 00:42:54.714 --rc genhtml_legend=1 00:42:54.714 --rc geninfo_all_blocks=1 00:42:54.714 --rc geninfo_unexecuted_blocks=1 00:42:54.714 00:42:54.714 ' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.714 --rc genhtml_branch_coverage=1 00:42:54.714 --rc genhtml_function_coverage=1 00:42:54.714 --rc genhtml_legend=1 00:42:54.714 --rc geninfo_all_blocks=1 00:42:54.714 --rc geninfo_unexecuted_blocks=1 00:42:54.714 
00:42:54.714 ' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.714 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:54.715 17:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:01.294 17:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:01.294 17:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:01.294 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:01.294 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:01.294 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:01.294 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:43:01.294 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:01.295 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:01.555 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:01.555 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:01.555 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:01.555 17:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:01.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:01.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:43:01.555 00:43:01.555 --- 10.0.0.2 ping statistics --- 00:43:01.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:01.555 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:01.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:01.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:43:01.555 00:43:01.555 --- 10.0.0.1 ping statistics --- 00:43:01.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:01.555 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:01.555 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3378266 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3378266 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3378266 ']' 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:01.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
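The block above gives the target its own network namespace (cvl_0_0 moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 left on the host with 10.0.0.1/24, port 4420 opened in iptables, reachability verified with ping in both directions) and then launches nvmf_tgt inside it; waitforlisten simply polls the RPC socket until the application answers. A stand-alone approximation of that bring-up, using the same flags as the log but a simplified polling loop instead of the real waitforlisten helper:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Shared-memory id 0, all tracepoint groups, interrupt mode, core mask 0xF (four reactors).
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Wait until the app serves JSON-RPC on the default socket before issuing any other RPC.
for _ in $(seq 1 100); do
    if $spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done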
00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:01.556 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.816 [2024-10-01 17:43:00.154827] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:01.816 [2024-10-01 17:43:00.155796] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:43:01.816 [2024-10-01 17:43:00.155836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:01.816 [2024-10-01 17:43:00.221559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:01.816 [2024-10-01 17:43:00.252813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:01.816 [2024-10-01 17:43:00.252853] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:01.816 [2024-10-01 17:43:00.252861] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:01.816 [2024-10-01 17:43:00.252867] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:01.816 [2024-10-01 17:43:00.252873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:01.816 [2024-10-01 17:43:00.253029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:01.816 [2024-10-01 17:43:00.253258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:01.816 [2024-10-01 17:43:00.253259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:01.816 [2024-10-01 17:43:00.253106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:01.816 [2024-10-01 17:43:00.313135] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:01.816 [2024-10-01 17:43:00.313321] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:01.816 [2024-10-01 17:43:00.314234] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:01.816 [2024-10-01 17:43:00.314972] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:01.816 [2024-10-01 17:43:00.315073] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
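Because the application was started with --interrupt-mode and -m 0xF, one reactor comes up per core in the mask and each nvmf_tgt poll-group thread is switched to interrupt mode, which is what the notices above record. That state can be inspected on a live target with the framework and thread RPCs; a small sketch follows (exact output fields differ between SPDK versions; recent ones report an in_interrupt flag per reactor).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# One entry per reactor in the 0xF mask, including its lcore and interrupt state.
$rpc -s $sock framework_get_reactors

# Per-thread stats for app_thread and the nvmf_tgt_poll_group_* threads seen above.
$rpc -s $sock thread_get_stats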
00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:02.758 17:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:02.758 [2024-10-01 17:43:01.133686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.758 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.019 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:03.019 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.019 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:03.019 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.280 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:03.280 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.540 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:03.540 17:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:03.540 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.800 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:03.800 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:04.060 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:04.060 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:04.321 17:43:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:04.321 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:04.321 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:04.583 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:04.583 17:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:04.844 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:04.844 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:04.844 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:05.105 [2024-10-01 17:43:03.497845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.105 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:05.365 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:05.365 17:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:43:05.936 17:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:07.845 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:43:07.846 17:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:07.846 [global] 00:43:07.846 thread=1 00:43:07.846 invalidate=1 00:43:07.846 rw=write 00:43:07.846 time_based=1 00:43:07.846 runtime=1 00:43:07.846 ioengine=libaio 00:43:07.846 direct=1 00:43:07.846 bs=4096 00:43:07.846 iodepth=1 00:43:07.846 norandommap=0 00:43:07.846 numjobs=1 00:43:07.846 00:43:07.846 verify_dump=1 00:43:07.846 verify_backlog=512 00:43:07.846 verify_state_save=0 00:43:07.846 do_verify=1 00:43:07.846 verify=crc32c-intel 00:43:07.846 [job0] 00:43:07.846 filename=/dev/nvme0n1 00:43:07.846 [job1] 00:43:07.846 filename=/dev/nvme0n2 00:43:07.846 [job2] 00:43:07.846 filename=/dev/nvme0n3 00:43:07.846 [job3] 00:43:07.846 filename=/dev/nvme0n4 00:43:07.846 Could not set queue depth (nvme0n1) 00:43:07.846 Could not set queue depth (nvme0n2) 00:43:07.846 Could not set queue depth (nvme0n3) 00:43:07.846 Could not set queue depth (nvme0n4) 00:43:08.426 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.426 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.426 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.426 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.426 fio-3.35 00:43:08.426 Starting 4 threads 00:43:09.832 00:43:09.832 job0: (groupid=0, jobs=1): err= 0: pid=3379712: Tue Oct 1 17:43:07 2024 00:43:09.832 read: IOPS=19, BW=78.0KiB/s (79.9kB/s)(80.0KiB/1025msec) 00:43:09.832 slat (nsec): min=10447, max=27635, avg=25725.30, stdev=3639.17 00:43:09.832 clat (usec): min=906, max=42065, avg=35815.97, stdev=14980.88 00:43:09.832 lat (usec): min=933, max=42091, avg=35841.70, stdev=14982.75 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[ 1057], 20.00th=[41681], 00:43:09.832 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:43:09.832 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:09.832 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:09.832 | 99.99th=[42206] 00:43:09.832 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:43:09.832 slat (nsec): min=8664, max=65616, avg=28601.82, stdev=10593.04 00:43:09.832 clat (usec): min=145, max=861, avg=566.40, stdev=114.67 00:43:09.832 lat (usec): min=166, max=910, avg=595.00, stdev=119.79 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 281], 5.00th=[ 367], 10.00th=[ 404], 20.00th=[ 474], 00:43:09.832 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 603], 00:43:09.832 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 734], 00:43:09.832 
| 99.00th=[ 783], 99.50th=[ 832], 99.90th=[ 865], 99.95th=[ 865], 00:43:09.832 | 99.99th=[ 865] 00:43:09.832 bw ( KiB/s): min= 4096, max= 4096, per=38.89%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.832 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.832 lat (usec) : 250=0.38%, 500=26.13%, 750=66.17%, 1000=3.76% 00:43:09.832 lat (msec) : 2=0.38%, 50=3.20% 00:43:09.832 cpu : usr=0.98%, sys=1.76%, ctx=532, majf=0, minf=1 00:43:09.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.832 job1: (groupid=0, jobs=1): err= 0: pid=3379734: Tue Oct 1 17:43:07 2024 00:43:09.832 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:09.832 slat (nsec): min=14153, max=44855, avg=26265.84, stdev=2465.42 00:43:09.832 clat (usec): min=719, max=1429, avg=1074.68, stdev=80.69 00:43:09.832 lat (usec): min=734, max=1456, avg=1100.94, stdev=80.35 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 848], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 1020], 00:43:09.832 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:43:09.832 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1156], 95.00th=[ 1188], 00:43:09.832 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1434], 99.95th=[ 1434], 00:43:09.832 | 99.99th=[ 1434] 00:43:09.832 write: IOPS=650, BW=2601KiB/s (2664kB/s)(2604KiB/1001msec); 0 zone resets 00:43:09.832 slat (nsec): min=9354, max=67699, avg=26791.27, stdev=10671.52 00:43:09.832 clat (usec): min=281, max=1046, avg=630.84, stdev=148.75 00:43:09.832 lat (usec): min=301, max=1078, avg=657.63, stdev=153.49 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 318], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 494], 00:43:09.832 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 676], 00:43:09.832 | 70.00th=[ 717], 80.00th=[ 766], 90.00th=[ 824], 95.00th=[ 873], 00:43:09.832 | 99.00th=[ 947], 99.50th=[ 996], 99.90th=[ 1045], 99.95th=[ 1045], 00:43:09.832 | 99.99th=[ 1045] 00:43:09.832 bw ( KiB/s): min= 4096, max= 4096, per=38.89%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.832 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.832 lat (usec) : 500=11.78%, 750=31.81%, 1000=18.74% 00:43:09.832 lat (msec) : 2=37.66% 00:43:09.832 cpu : usr=2.50%, sys=2.90%, ctx=1164, majf=0, minf=1 00:43:09.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 issued rwts: total=512,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.832 job2: (groupid=0, jobs=1): err= 0: pid=3379762: Tue Oct 1 17:43:07 2024 00:43:09.832 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:43:09.832 slat (nsec): min=25887, max=26731, avg=26469.17, stdev=201.60 00:43:09.832 clat (usec): min=40912, max=41956, avg=41374.88, stdev=463.53 00:43:09.832 lat (usec): min=40939, max=41983, avg=41401.35, stdev=463.54 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:43:09.832 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:43:09.832 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:09.832 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:09.832 | 99.99th=[42206] 00:43:09.832 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:43:09.832 slat (nsec): min=9357, max=54411, avg=26705.45, stdev=10662.72 00:43:09.832 clat (usec): min=143, max=855, avg=509.08, stdev=161.81 00:43:09.832 lat (usec): min=154, max=889, avg=535.79, stdev=164.74 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 251], 20.00th=[ 338], 00:43:09.832 | 30.00th=[ 474], 40.00th=[ 523], 50.00th=[ 570], 60.00th=[ 594], 00:43:09.832 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 701], 00:43:09.832 | 99.00th=[ 783], 99.50th=[ 840], 99.90th=[ 857], 99.95th=[ 857], 00:43:09.832 | 99.99th=[ 857] 00:43:09.832 bw ( KiB/s): min= 4096, max= 4096, per=38.89%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.832 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.832 lat (usec) : 250=9.62%, 500=24.15%, 750=60.57%, 1000=2.26% 00:43:09.832 lat (msec) : 50=3.40% 00:43:09.832 cpu : usr=0.59%, sys=1.47%, ctx=531, majf=0, minf=1 00:43:09.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.832 job3: (groupid=0, jobs=1): err= 0: pid=3379771: Tue Oct 1 17:43:07 2024 00:43:09.832 read: IOPS=588, BW=2354KiB/s (2410kB/s)(2356KiB/1001msec) 00:43:09.832 slat (nsec): min=7433, max=61336, avg=24870.06, stdev=8378.21 00:43:09.832 clat (usec): min=375, max=1450, avg=800.64, stdev=115.41 00:43:09.832 lat (usec): min=382, max=1479, avg=825.51, stdev=117.67 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 482], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 734], 00:43:09.832 | 30.00th=[ 766], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:43:09.832 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 906], 95.00th=[ 1045], 00:43:09.832 | 99.00th=[ 1156], 99.50th=[ 1254], 99.90th=[ 1450], 99.95th=[ 1450], 00:43:09.832 | 99.99th=[ 1450] 00:43:09.832 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:09.832 slat (nsec): min=9686, max=69854, avg=27982.06, stdev=12224.40 00:43:09.832 clat (usec): min=115, max=875, avg=463.66, stdev=101.61 00:43:09.832 lat (usec): min=128, max=912, avg=491.64, stdev=105.04 00:43:09.832 clat percentiles (usec): 00:43:09.832 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 375], 00:43:09.832 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 461], 60.00th=[ 478], 00:43:09.832 | 70.00th=[ 498], 80.00th=[ 537], 90.00th=[ 594], 95.00th=[ 660], 00:43:09.832 | 99.00th=[ 750], 99.50th=[ 807], 99.90th=[ 865], 99.95th=[ 873], 00:43:09.832 | 99.99th=[ 873] 00:43:09.832 bw ( KiB/s): min= 4096, max= 4096, per=38.89%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.832 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.832 lat (usec) : 250=0.25%, 500=45.69%, 750=26.04%, 1000=25.67% 00:43:09.832 lat (msec) : 2=2.36% 00:43:09.832 cpu : usr=2.40%, sys=4.50%, ctx=1616, majf=0, minf=1 00:43:09.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.832 issued rwts: total=589,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.832 00:43:09.832 Run status group 0 (all jobs): 00:43:09.832 READ: bw=4445KiB/s (4552kB/s), 70.4KiB/s-2354KiB/s (72.1kB/s-2410kB/s), io=4556KiB (4665kB), run=1001-1025msec 00:43:09.832 WRITE: bw=10.3MiB/s (10.8MB/s), 1998KiB/s-4092KiB/s (2046kB/s-4190kB/s), io=10.5MiB (11.1MB), run=1001-1025msec 00:43:09.832 00:43:09.832 Disk stats (read/write): 00:43:09.833 nvme0n1: ios=64/512, merge=0/0, ticks=538/233, in_queue=771, util=82.97% 00:43:09.833 nvme0n2: ios=437/512, merge=0/0, ticks=641/309, in_queue=950, util=87.15% 00:43:09.833 nvme0n3: ios=12/512, merge=0/0, ticks=494/255, in_queue=749, util=86.98% 00:43:09.833 nvme0n4: ios=569/696, merge=0/0, ticks=1000/316, in_queue=1316, util=96.05% 00:43:09.833 17:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:09.833 [global] 00:43:09.833 thread=1 00:43:09.833 invalidate=1 00:43:09.833 rw=randwrite 00:43:09.833 time_based=1 00:43:09.833 runtime=1 00:43:09.833 ioengine=libaio 00:43:09.833 direct=1 00:43:09.833 bs=4096 00:43:09.833 iodepth=1 00:43:09.833 norandommap=0 00:43:09.833 numjobs=1 00:43:09.833 00:43:09.833 verify_dump=1 00:43:09.833 verify_backlog=512 00:43:09.833 verify_state_save=0 00:43:09.833 do_verify=1 00:43:09.833 verify=crc32c-intel 00:43:09.833 [job0] 00:43:09.833 filename=/dev/nvme0n1 00:43:09.833 [job1] 00:43:09.833 filename=/dev/nvme0n2 00:43:09.833 [job2] 00:43:09.833 filename=/dev/nvme0n3 00:43:09.833 [job3] 00:43:09.833 filename=/dev/nvme0n4 00:43:09.833 Could not set queue depth (nvme0n1) 00:43:09.833 Could not set queue depth (nvme0n2) 00:43:09.833 Could not set queue depth (nvme0n3) 00:43:09.833 Could not set queue depth (nvme0n4) 00:43:10.093 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.093 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.093 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.093 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.093 fio-3.35 00:43:10.093 Starting 4 threads 00:43:11.513 00:43:11.513 job0: (groupid=0, jobs=1): err= 0: pid=3380179: Tue Oct 1 17:43:09 2024 00:43:11.513 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:11.513 slat (nsec): min=6626, max=47479, avg=26358.32, stdev=6824.16 00:43:11.513 clat (usec): min=516, max=41984, avg=1057.19, stdev=2548.60 00:43:11.513 lat (usec): min=544, max=42010, avg=1083.55, stdev=2548.67 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 783], 00:43:11.513 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[ 857], 60.00th=[ 947], 00:43:11.513 | 70.00th=[ 1004], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1123], 00:43:11.513 | 99.00th=[ 1188], 99.50th=[ 1287], 99.90th=[42206], 99.95th=[42206], 00:43:11.513 | 99.99th=[42206] 00:43:11.513 write: IOPS=672, BW=2689KiB/s (2754kB/s)(2692KiB/1001msec); 0 zone resets 
00:43:11.513 slat (nsec): min=9223, max=65759, avg=30329.08, stdev=10031.56 00:43:11.513 clat (usec): min=267, max=1754, avg=617.37, stdev=142.04 00:43:11.513 lat (usec): min=277, max=1764, avg=647.70, stdev=145.39 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[ 445], 20.00th=[ 490], 00:43:11.513 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 627], 60.00th=[ 660], 00:43:11.513 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 840], 00:43:11.513 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1762], 99.95th=[ 1762], 00:43:11.513 | 99.99th=[ 1762] 00:43:11.513 bw ( KiB/s): min= 4087, max= 4087, per=40.96%, avg=4087.00, stdev= 0.00, samples=1 00:43:11.513 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:11.513 lat (usec) : 500=12.74%, 750=40.51%, 1000=33.00% 00:43:11.513 lat (msec) : 2=13.59%, 50=0.17% 00:43:11.513 cpu : usr=2.20%, sys=4.30%, ctx=1186, majf=0, minf=1 00:43:11.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 issued rwts: total=512,673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.513 job1: (groupid=0, jobs=1): err= 0: pid=3380191: Tue Oct 1 17:43:09 2024 00:43:11.513 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:11.513 slat (nsec): min=6430, max=60204, avg=26089.23, stdev=5393.44 00:43:11.513 clat (usec): min=556, max=1278, avg=953.94, stdev=142.92 00:43:11.513 lat (usec): min=582, max=1304, avg=980.03, stdev=144.08 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 635], 5.00th=[ 701], 10.00th=[ 750], 20.00th=[ 799], 00:43:11.513 | 30.00th=[ 857], 40.00th=[ 947], 50.00th=[ 996], 60.00th=[ 1029], 00:43:11.513 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:43:11.513 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1287], 00:43:11.513 | 99.99th=[ 1287] 00:43:11.513 write: IOPS=884, BW=3536KiB/s (3621kB/s)(3540KiB/1001msec); 0 zone resets 00:43:11.513 slat (nsec): min=8810, max=52652, avg=27794.47, stdev=10007.58 00:43:11.513 clat (usec): min=238, max=1719, avg=523.76, stdev=131.77 00:43:11.513 lat (usec): min=247, max=1728, avg=551.55, stdev=135.29 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 289], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 416], 00:43:11.513 | 30.00th=[ 453], 40.00th=[ 482], 50.00th=[ 510], 60.00th=[ 545], 00:43:11.513 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 734], 00:43:11.513 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 1713], 99.95th=[ 1713], 00:43:11.513 | 99.99th=[ 1713] 00:43:11.513 bw ( KiB/s): min= 4087, max= 4087, per=40.96%, avg=4087.00, stdev= 0.00, samples=1 00:43:11.513 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:11.513 lat (usec) : 250=0.07%, 500=29.78%, 750=34.50%, 1000=17.68% 00:43:11.513 lat (msec) : 2=17.97% 00:43:11.513 cpu : usr=2.80%, sys=5.10%, ctx=1397, majf=0, minf=2 00:43:11.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 issued rwts: total=512,885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.513 latency : target=0, window=0, percentile=100.00%, depth=1 
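Aside: the [global]/[job0]..[job3] stanzas echoed before each fio-wrapper invocation above are ordinary fio job files, and the per-job slat/clat/percentile tables that follow are standard fio output for them. As a rough sketch only (the temporary job-file path is assumed for illustration; the parameters and namespace device names are the ones already listed in this log), the same 4 KiB, iodepth=1 write-verify workload could be driven with fio directly:

# recreate the job file fio-wrapper echoes above, one libaio job per NVMe-oF namespace
cat > /tmp/nvmf_write_verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
# run it; this prints the same per-job statistics blocks seen in this log
fio /tmp/nvmf_write_verify.fio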
00:43:11.513 job2: (groupid=0, jobs=1): err= 0: pid=3380207: Tue Oct 1 17:43:09 2024 00:43:11.513 read: IOPS=393, BW=1573KiB/s (1611kB/s)(1628KiB/1035msec) 00:43:11.513 slat (nsec): min=28182, max=49145, avg=29179.34, stdev=2742.92 00:43:11.513 clat (usec): min=713, max=42011, avg=1652.42, stdev=4908.29 00:43:11.513 lat (usec): min=742, max=42040, avg=1681.60, stdev=4908.22 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 832], 5.00th=[ 898], 10.00th=[ 947], 20.00th=[ 996], 00:43:11.513 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:43:11.513 | 70.00th=[ 1106], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:43:11.513 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:11.513 | 99.99th=[42206] 00:43:11.513 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:43:11.513 slat (nsec): min=9557, max=72381, avg=33605.72, stdev=9474.77 00:43:11.513 clat (usec): min=175, max=2242, avg=633.39, stdev=203.13 00:43:11.513 lat (usec): min=211, max=2277, avg=666.99, stdev=204.85 00:43:11.513 clat percentiles (usec): 00:43:11.513 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 469], 00:43:11.513 | 30.00th=[ 529], 40.00th=[ 586], 50.00th=[ 635], 60.00th=[ 676], 00:43:11.513 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 873], 00:43:11.513 | 99.00th=[ 1582], 99.50th=[ 1713], 99.90th=[ 2245], 99.95th=[ 2245], 00:43:11.513 | 99.99th=[ 2245] 00:43:11.513 bw ( KiB/s): min= 4087, max= 4087, per=40.96%, avg=4087.00, stdev= 0.00, samples=1 00:43:11.513 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:11.513 lat (usec) : 250=0.33%, 500=14.15%, 750=27.64%, 1000=22.09% 00:43:11.513 lat (msec) : 2=34.93%, 4=0.22%, 50=0.65% 00:43:11.513 cpu : usr=2.03%, sys=3.58%, ctx=921, majf=0, minf=1 00:43:11.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.513 issued rwts: total=407,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.514 job3: (groupid=0, jobs=1): err= 0: pid=3380213: Tue Oct 1 17:43:09 2024 00:43:11.514 read: IOPS=17, BW=70.8KiB/s (72.5kB/s)(72.0KiB/1017msec) 00:43:11.514 slat (nsec): min=7608, max=28698, avg=25545.28, stdev=6136.20 00:43:11.514 clat (usec): min=989, max=42132, avg=39576.07, stdev=9636.24 00:43:11.514 lat (usec): min=998, max=42159, avg=39601.62, stdev=9640.15 00:43:11.514 clat percentiles (usec): 00:43:11.514 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[40633], 20.00th=[41681], 00:43:11.514 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:11.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:11.514 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:11.514 | 99.99th=[42206] 00:43:11.514 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:43:11.514 slat (nsec): min=9214, max=54169, avg=27126.63, stdev=11666.64 00:43:11.514 clat (usec): min=114, max=2281, avg=558.85, stdev=217.51 00:43:11.514 lat (usec): min=124, max=2316, avg=585.98, stdev=222.72 00:43:11.514 clat percentiles (usec): 00:43:11.514 | 1.00th=[ 121], 5.00th=[ 169], 10.00th=[ 322], 20.00th=[ 404], 00:43:11.514 | 30.00th=[ 465], 40.00th=[ 523], 50.00th=[ 570], 60.00th=[ 611], 00:43:11.514 | 70.00th=[ 668], 80.00th=[ 701], 
90.00th=[ 766], 95.00th=[ 816], 00:43:11.514 | 99.00th=[ 898], 99.50th=[ 2040], 99.90th=[ 2278], 99.95th=[ 2278], 00:43:11.514 | 99.99th=[ 2278] 00:43:11.514 bw ( KiB/s): min= 4096, max= 4096, per=41.05%, avg=4096.00, stdev= 0.00, samples=1 00:43:11.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:11.514 lat (usec) : 250=5.85%, 500=28.30%, 750=50.94%, 1000=10.94% 00:43:11.514 lat (msec) : 2=0.19%, 4=0.57%, 50=3.21% 00:43:11.514 cpu : usr=0.98%, sys=1.67%, ctx=531, majf=0, minf=1 00:43:11.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.514 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.514 00:43:11.514 Run status group 0 (all jobs): 00:43:11.514 READ: bw=5600KiB/s (5734kB/s), 70.8KiB/s-2046KiB/s (72.5kB/s-2095kB/s), io=5796KiB (5935kB), run=1001-1035msec 00:43:11.514 WRITE: bw=9979KiB/s (10.2MB/s), 1979KiB/s-3536KiB/s (2026kB/s-3621kB/s), io=10.1MiB (10.6MB), run=1001-1035msec 00:43:11.514 00:43:11.514 Disk stats (read/write): 00:43:11.514 nvme0n1: ios=490/512, merge=0/0, ticks=1448/287, in_queue=1735, util=97.90% 00:43:11.514 nvme0n2: ios=545/615, merge=0/0, ticks=480/233, in_queue=713, util=87.23% 00:43:11.514 nvme0n3: ios=450/512, merge=0/0, ticks=785/256, in_queue=1041, util=95.88% 00:43:11.514 nvme0n4: ios=46/512, merge=0/0, ticks=1131/230, in_queue=1361, util=96.36% 00:43:11.514 17:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:11.514 [global] 00:43:11.514 thread=1 00:43:11.514 invalidate=1 00:43:11.514 rw=write 00:43:11.514 time_based=1 00:43:11.514 runtime=1 00:43:11.514 ioengine=libaio 00:43:11.514 direct=1 00:43:11.514 bs=4096 00:43:11.514 iodepth=128 00:43:11.514 norandommap=0 00:43:11.514 numjobs=1 00:43:11.514 00:43:11.514 verify_dump=1 00:43:11.514 verify_backlog=512 00:43:11.514 verify_state_save=0 00:43:11.514 do_verify=1 00:43:11.514 verify=crc32c-intel 00:43:11.514 [job0] 00:43:11.514 filename=/dev/nvme0n1 00:43:11.514 [job1] 00:43:11.514 filename=/dev/nvme0n2 00:43:11.514 [job2] 00:43:11.514 filename=/dev/nvme0n3 00:43:11.514 [job3] 00:43:11.514 filename=/dev/nvme0n4 00:43:11.514 Could not set queue depth (nvme0n1) 00:43:11.514 Could not set queue depth (nvme0n2) 00:43:11.514 Could not set queue depth (nvme0n3) 00:43:11.514 Could not set queue depth (nvme0n4) 00:43:11.777 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.777 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.777 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.777 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.777 fio-3.35 00:43:11.777 Starting 4 threads 00:43:13.180 00:43:13.180 job0: (groupid=0, jobs=1): err= 0: pid=3380637: Tue Oct 1 17:43:11 2024 00:43:13.180 read: IOPS=3115, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1007msec) 00:43:13.180 slat (nsec): min=991, max=41312k, avg=137396.41, stdev=1310927.13 00:43:13.180 clat (msec): min=2, max=102, avg=19.66, stdev=21.12 
00:43:13.180 lat (msec): min=2, max=111, avg=19.79, stdev=21.25 00:43:13.180 clat percentiles (msec): 00:43:13.180 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:43:13.180 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 00:43:13.180 | 70.00th=[ 15], 80.00th=[ 29], 90.00th=[ 56], 95.00th=[ 61], 00:43:13.180 | 99.00th=[ 95], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:43:13.180 | 99.99th=[ 103] 00:43:13.180 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:43:13.180 slat (nsec): min=1742, max=16407k, avg=146898.46, stdev=1106497.20 00:43:13.180 clat (usec): min=1141, max=69717, avg=18441.99, stdev=16781.98 00:43:13.180 lat (usec): min=1155, max=69727, avg=18588.88, stdev=16890.54 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 3097], 5.00th=[ 4621], 10.00th=[ 5997], 20.00th=[ 6325], 00:43:13.180 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[10945], 60.00th=[12387], 00:43:13.180 | 70.00th=[21890], 80.00th=[32113], 90.00th=[47973], 95.00th=[53740], 00:43:13.180 | 99.00th=[66323], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:43:13.180 | 99.99th=[69731] 00:43:13.180 bw ( KiB/s): min= 7688, max=20480, per=16.35%, avg=14084.00, stdev=9045.31, samples=2 00:43:13.180 iops : min= 1922, max= 5120, avg=3521.00, stdev=2261.33, samples=2 00:43:13.180 lat (msec) : 2=0.46%, 4=0.24%, 10=49.03%, 20=23.05%, 50=16.99% 00:43:13.180 lat (msec) : 100=9.88%, 250=0.36% 00:43:13.180 cpu : usr=2.68%, sys=4.17%, ctx=176, majf=0, minf=2 00:43:13.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:13.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.180 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.180 job1: (groupid=0, jobs=1): err= 0: pid=3380646: Tue Oct 1 17:43:11 2024 00:43:13.180 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:43:13.180 slat (nsec): min=964, max=12974k, avg=101439.31, stdev=773649.58 00:43:13.180 clat (usec): min=2648, max=54490, avg=12768.81, stdev=5406.05 00:43:13.180 lat (usec): min=2654, max=54498, avg=12870.25, stdev=5474.43 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 5473], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[ 7767], 00:43:13.180 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[12518], 60.00th=[13173], 00:43:13.180 | 70.00th=[14353], 80.00th=[15795], 90.00th=[17957], 95.00th=[21627], 00:43:13.180 | 99.00th=[30802], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:43:13.180 | 99.99th=[54264] 00:43:13.180 write: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1011msec); 0 zone resets 00:43:13.180 slat (nsec): min=1616, max=10965k, avg=123752.91, stdev=790541.61 00:43:13.180 clat (usec): min=2494, max=72475, avg=16799.42, stdev=16823.13 00:43:13.180 lat (usec): min=2502, max=72484, avg=16923.17, stdev=16940.35 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 3621], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 7177], 00:43:13.180 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[12125], 00:43:13.180 | 70.00th=[15008], 80.00th=[17957], 90.00th=[42206], 95.00th=[65274], 00:43:13.180 | 99.00th=[70779], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:43:13.180 | 99.99th=[72877] 00:43:13.180 bw ( KiB/s): min=11424, max=23000, per=19.98%, avg=17212.00, stdev=8185.47, samples=2 00:43:13.180 iops : min= 2856, max= 
5750, avg=4303.00, stdev=2046.37, samples=2 00:43:13.180 lat (msec) : 4=0.83%, 10=38.44%, 20=48.96%, 50=6.64%, 100=5.14% 00:43:13.180 cpu : usr=3.37%, sys=4.75%, ctx=261, majf=0, minf=1 00:43:13.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:13.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.180 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.180 job2: (groupid=0, jobs=1): err= 0: pid=3380657: Tue Oct 1 17:43:11 2024 00:43:13.180 read: IOPS=7532, BW=29.4MiB/s (30.9MB/s)(29.6MiB/1005msec) 00:43:13.180 slat (nsec): min=964, max=10077k, avg=63143.40, stdev=455830.25 00:43:13.180 clat (usec): min=2392, max=60783, avg=8062.82, stdev=4724.21 00:43:13.180 lat (usec): min=2427, max=60791, avg=8125.96, stdev=4774.83 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6063], 00:43:13.180 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7242], 00:43:13.180 | 70.00th=[ 8160], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[14877], 00:43:13.180 | 99.00th=[29754], 99.50th=[44827], 99.90th=[57934], 99.95th=[60556], 00:43:13.180 | 99.99th=[60556] 00:43:13.180 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:43:13.180 slat (nsec): min=1710, max=7533.3k, avg=62169.93, stdev=391541.87 00:43:13.180 clat (usec): min=2245, max=62369, avg=8621.75, stdev=8974.93 00:43:13.180 lat (usec): min=2253, max=62379, avg=8683.92, stdev=9031.96 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 3064], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5473], 00:43:13.180 | 30.00th=[ 5997], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6652], 00:43:13.180 | 70.00th=[ 6783], 80.00th=[ 8455], 90.00th=[10945], 95.00th=[20841], 00:43:13.180 | 99.00th=[57410], 99.50th=[58459], 99.90th=[61604], 99.95th=[62129], 00:43:13.180 | 99.99th=[62129] 00:43:13.180 bw ( KiB/s): min=22368, max=39072, per=35.66%, avg=30720.00, stdev=11811.51, samples=2 00:43:13.180 iops : min= 5592, max= 9768, avg=7680.00, stdev=2952.88, samples=2 00:43:13.180 lat (msec) : 4=1.88%, 10=85.18%, 20=9.67%, 50=1.71%, 100=1.57% 00:43:13.180 cpu : usr=6.97%, sys=7.57%, ctx=647, majf=0, minf=1 00:43:13.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:13.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.180 issued rwts: total=7570,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.180 job3: (groupid=0, jobs=1): err= 0: pid=3380662: Tue Oct 1 17:43:11 2024 00:43:13.180 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:43:13.180 slat (nsec): min=1385, max=9910.4k, avg=81867.08, stdev=643510.12 00:43:13.180 clat (usec): min=4704, max=26022, avg=11198.03, stdev=3978.50 00:43:13.180 lat (usec): min=5124, max=26028, avg=11279.89, stdev=4012.01 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 5276], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7701], 00:43:13.180 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[10683], 60.00th=[11469], 00:43:13.180 | 70.00th=[12649], 80.00th=[14353], 90.00th=[16712], 95.00th=[19006], 00:43:13.180 | 99.00th=[22676], 99.50th=[24249], 99.90th=[24249], 
99.95th=[24773], 00:43:13.180 | 99.99th=[26084] 00:43:13.180 write: IOPS=6011, BW=23.5MiB/s (24.6MB/s)(23.7MiB/1011msec); 0 zone resets 00:43:13.180 slat (nsec): min=1599, max=9671.8k, avg=82496.67, stdev=616799.87 00:43:13.180 clat (usec): min=1178, max=42794, avg=10727.21, stdev=5780.29 00:43:13.180 lat (usec): min=1190, max=42798, avg=10809.71, stdev=5818.55 00:43:13.180 clat percentiles (usec): 00:43:13.180 | 1.00th=[ 4686], 5.00th=[ 5145], 10.00th=[ 5997], 20.00th=[ 6718], 00:43:13.180 | 30.00th=[ 7504], 40.00th=[ 8356], 50.00th=[ 9503], 60.00th=[10421], 00:43:13.180 | 70.00th=[11600], 80.00th=[12911], 90.00th=[16581], 95.00th=[21627], 00:43:13.180 | 99.00th=[38011], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:43:13.180 | 99.99th=[42730] 00:43:13.180 bw ( KiB/s): min=23736, max=23872, per=27.63%, avg=23804.00, stdev=96.17, samples=2 00:43:13.180 iops : min= 5934, max= 5968, avg=5951.00, stdev=24.04, samples=2 00:43:13.180 lat (msec) : 2=0.14%, 10=48.22%, 20=47.34%, 50=4.31% 00:43:13.180 cpu : usr=4.26%, sys=6.93%, ctx=265, majf=0, minf=1 00:43:13.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:13.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.180 issued rwts: total=5632,6078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.180 00:43:13.180 Run status group 0 (all jobs): 00:43:13.180 READ: bw=79.0MiB/s (82.8MB/s), 12.2MiB/s-29.4MiB/s (12.8MB/s-30.9MB/s), io=79.8MiB (83.7MB), run=1005-1011msec 00:43:13.180 WRITE: bw=84.1MiB/s (88.2MB/s), 13.9MiB/s-29.9MiB/s (14.6MB/s-31.3MB/s), io=85.0MiB (89.2MB), run=1005-1011msec 00:43:13.180 00:43:13.180 Disk stats (read/write): 00:43:13.180 nvme0n1: ios=2899/3072, merge=0/0, ticks=28319/33598, in_queue=61917, util=97.39% 00:43:13.180 nvme0n2: ios=3621/3919, merge=0/0, ticks=44912/54977, in_queue=99889, util=97.16% 00:43:13.180 nvme0n3: ios=6192/6147, merge=0/0, ticks=48755/54666, in_queue=103421, util=100.00% 00:43:13.180 nvme0n4: ios=4945/5120, merge=0/0, ticks=54635/46987, in_queue=101622, util=89.61% 00:43:13.180 17:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:13.180 [global] 00:43:13.180 thread=1 00:43:13.180 invalidate=1 00:43:13.180 rw=randwrite 00:43:13.180 time_based=1 00:43:13.180 runtime=1 00:43:13.180 ioengine=libaio 00:43:13.180 direct=1 00:43:13.180 bs=4096 00:43:13.180 iodepth=128 00:43:13.180 norandommap=0 00:43:13.180 numjobs=1 00:43:13.180 00:43:13.180 verify_dump=1 00:43:13.180 verify_backlog=512 00:43:13.180 verify_state_save=0 00:43:13.180 do_verify=1 00:43:13.180 verify=crc32c-intel 00:43:13.180 [job0] 00:43:13.180 filename=/dev/nvme0n1 00:43:13.180 [job1] 00:43:13.180 filename=/dev/nvme0n2 00:43:13.180 [job2] 00:43:13.180 filename=/dev/nvme0n3 00:43:13.180 [job3] 00:43:13.180 filename=/dev/nvme0n4 00:43:13.180 Could not set queue depth (nvme0n1) 00:43:13.180 Could not set queue depth (nvme0n2) 00:43:13.180 Could not set queue depth (nvme0n3) 00:43:13.180 Could not set queue depth (nvme0n4) 00:43:13.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:43:13.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.449 fio-3.35 00:43:13.449 Starting 4 threads 00:43:14.853 00:43:14.853 job0: (groupid=0, jobs=1): err= 0: pid=3381109: Tue Oct 1 17:43:13 2024 00:43:14.853 read: IOPS=5641, BW=22.0MiB/s (23.1MB/s)(23.0MiB/1044msec) 00:43:14.853 slat (nsec): min=913, max=9742.6k, avg=74584.57, stdev=511266.60 00:43:14.853 clat (usec): min=2276, max=50835, avg=10440.32, stdev=6718.73 00:43:14.853 lat (usec): min=2298, max=50839, avg=10514.90, stdev=6735.06 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 3687], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6587], 00:43:14.853 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8848], 60.00th=[ 9765], 00:43:14.853 | 70.00th=[10683], 80.00th=[12518], 90.00th=[15008], 95.00th=[19530], 00:43:14.853 | 99.00th=[46924], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:43:14.853 | 99.99th=[50594] 00:43:14.853 write: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1044msec); 0 zone resets 00:43:14.853 slat (nsec): min=1548, max=9313.9k, avg=85832.97, stdev=489936.14 00:43:14.853 clat (usec): min=1122, max=46557, avg=11549.39, stdev=8623.71 00:43:14.853 lat (usec): min=1132, max=46567, avg=11635.22, stdev=8679.75 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 2769], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5800], 00:43:14.853 | 30.00th=[ 6456], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 9634], 00:43:14.853 | 70.00th=[12518], 80.00th=[14091], 90.00th=[23725], 95.00th=[31589], 00:43:14.853 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:43:14.853 | 99.99th=[46400] 00:43:14.853 bw ( KiB/s): min=24016, max=25136, per=26.66%, avg=24576.00, stdev=791.96, samples=2 00:43:14.853 iops : min= 6004, max= 6284, avg=6144.00, stdev=197.99, samples=2 00:43:14.853 lat (msec) : 2=0.20%, 4=2.18%, 10=60.35%, 20=28.15%, 50=8.91% 00:43:14.853 lat (msec) : 100=0.22% 00:43:14.853 cpu : usr=4.60%, sys=4.89%, ctx=484, majf=0, minf=2 00:43:14.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:14.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.853 issued rwts: total=5890,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.853 job1: (groupid=0, jobs=1): err= 0: pid=3381117: Tue Oct 1 17:43:13 2024 00:43:14.853 read: IOPS=3440, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1003msec) 00:43:14.853 slat (nsec): min=891, max=11061k, avg=127468.47, stdev=730289.18 00:43:14.853 clat (usec): min=2421, max=47633, avg=14513.73, stdev=6742.68 00:43:14.853 lat (usec): min=5050, max=47641, avg=14641.20, stdev=6820.02 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[ 7832], 20.00th=[ 8979], 00:43:14.853 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11994], 60.00th=[14746], 00:43:14.853 | 70.00th=[16909], 80.00th=[19792], 90.00th=[24249], 95.00th=[26346], 00:43:14.853 | 99.00th=[37487], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:43:14.853 | 99.99th=[47449] 00:43:14.853 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:43:14.853 slat (nsec): min=1497, max=13000k, avg=150512.75, stdev=812246.86 00:43:14.853 clat (usec): 
min=5218, max=70857, avg=21253.44, stdev=16694.77 00:43:14.853 lat (usec): min=5222, max=70867, avg=21403.95, stdev=16811.04 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 7111], 5.00th=[ 8094], 10.00th=[ 8160], 20.00th=[ 8979], 00:43:14.853 | 30.00th=[11076], 40.00th=[13304], 50.00th=[14222], 60.00th=[17433], 00:43:14.853 | 70.00th=[19268], 80.00th=[26870], 90.00th=[51119], 95.00th=[61604], 00:43:14.853 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:43:14.853 | 99.99th=[70779] 00:43:14.853 bw ( KiB/s): min=12288, max=16384, per=15.55%, avg=14336.00, stdev=2896.31, samples=2 00:43:14.853 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:43:14.853 lat (msec) : 4=0.01%, 10=32.37%, 20=44.11%, 50=17.88%, 100=5.63% 00:43:14.853 cpu : usr=2.99%, sys=3.49%, ctx=356, majf=0, minf=1 00:43:14.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:14.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.853 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.853 job2: (groupid=0, jobs=1): err= 0: pid=3381127: Tue Oct 1 17:43:13 2024 00:43:14.853 read: IOPS=7004, BW=27.4MiB/s (28.7MB/s)(27.4MiB/1003msec) 00:43:14.853 slat (nsec): min=931, max=9634.1k, avg=68905.17, stdev=537251.74 00:43:14.853 clat (usec): min=1776, max=27242, avg=9637.07, stdev=3600.26 00:43:14.853 lat (usec): min=1802, max=27251, avg=9705.98, stdev=3636.92 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 2606], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 6915], 00:43:14.853 | 30.00th=[ 7373], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9634], 00:43:14.853 | 70.00th=[10552], 80.00th=[11731], 90.00th=[15008], 95.00th=[16188], 00:43:14.853 | 99.00th=[21627], 99.50th=[22152], 99.90th=[26346], 99.95th=[27132], 00:43:14.853 | 99.99th=[27132] 00:43:14.853 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:43:14.853 slat (nsec): min=1531, max=10503k, avg=53447.36, stdev=443022.14 00:43:14.853 clat (usec): min=677, max=22255, avg=8325.00, stdev=3002.50 00:43:14.853 lat (usec): min=693, max=22264, avg=8378.45, stdev=3014.60 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 1352], 5.00th=[ 3982], 10.00th=[ 5014], 20.00th=[ 5997], 00:43:14.853 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8455], 00:43:14.853 | 70.00th=[ 9241], 80.00th=[10683], 90.00th=[12518], 95.00th=[14091], 00:43:14.853 | 99.00th=[15139], 99.50th=[19792], 99.90th=[22152], 99.95th=[22152], 00:43:14.853 | 99.99th=[22152] 00:43:14.853 bw ( KiB/s): min=28616, max=28728, per=31.10%, avg=28672.00, stdev=79.20, samples=2 00:43:14.853 iops : min= 7154, max= 7182, avg=7168.00, stdev=19.80, samples=2 00:43:14.853 lat (usec) : 750=0.02%, 1000=0.23% 00:43:14.853 lat (msec) : 2=0.78%, 4=2.53%, 10=65.91%, 20=29.53%, 50=1.00% 00:43:14.853 cpu : usr=4.39%, sys=8.37%, ctx=472, majf=0, minf=1 00:43:14.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:14.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.853 issued rwts: total=7026,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.853 job3: (groupid=0, jobs=1): 
err= 0: pid=3381134: Tue Oct 1 17:43:13 2024 00:43:14.853 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:43:14.853 slat (nsec): min=1007, max=8844.9k, avg=68354.79, stdev=518263.54 00:43:14.853 clat (usec): min=4082, max=27434, avg=9107.52, stdev=3382.01 00:43:14.853 lat (usec): min=4093, max=27436, avg=9175.87, stdev=3411.15 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 6980], 00:43:14.853 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8455], 00:43:14.853 | 70.00th=[ 9634], 80.00th=[11207], 90.00th=[12911], 95.00th=[16450], 00:43:14.853 | 99.00th=[22676], 99.50th=[23987], 99.90th=[26346], 99.95th=[27395], 00:43:14.853 | 99.99th=[27395] 00:43:14.853 write: IOPS=7138, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:43:14.853 slat (nsec): min=1619, max=6381.2k, avg=71178.86, stdev=469904.99 00:43:14.853 clat (usec): min=1160, max=58848, avg=9318.80, stdev=7069.07 00:43:14.853 lat (usec): min=1171, max=58856, avg=9389.98, stdev=7111.41 00:43:14.853 clat percentiles (usec): 00:43:14.853 | 1.00th=[ 3589], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 6128], 00:43:14.853 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8160], 00:43:14.853 | 70.00th=[ 9110], 80.00th=[10421], 90.00th=[12125], 95.00th=[15795], 00:43:14.853 | 99.00th=[53216], 99.50th=[56361], 99.90th=[57410], 99.95th=[58983], 00:43:14.853 | 99.99th=[58983] 00:43:14.853 bw ( KiB/s): min=24704, max=31552, per=30.51%, avg=28128.00, stdev=4842.27, samples=2 00:43:14.853 iops : min= 6176, max= 7888, avg=7032.00, stdev=1210.57, samples=2 00:43:14.853 lat (msec) : 2=0.05%, 4=0.65%, 10=73.36%, 20=22.56%, 50=2.74% 00:43:14.853 lat (msec) : 100=0.63% 00:43:14.853 cpu : usr=5.08%, sys=6.78%, ctx=436, majf=0, minf=1 00:43:14.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:14.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.853 issued rwts: total=6656,7167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.853 00:43:14.853 Run status group 0 (all jobs): 00:43:14.853 READ: bw=86.1MiB/s (90.3MB/s), 13.4MiB/s-27.4MiB/s (14.1MB/s-28.7MB/s), io=89.9MiB (94.3MB), run=1003-1044msec 00:43:14.853 WRITE: bw=90.0MiB/s (94.4MB/s), 14.0MiB/s-27.9MiB/s (14.6MB/s-29.3MB/s), io=94.0MiB (98.6MB), run=1003-1044msec 00:43:14.853 00:43:14.853 Disk stats (read/write): 00:43:14.853 nvme0n1: ios=5170/5318, merge=0/0, ticks=45492/55312, in_queue=100804, util=87.88% 00:43:14.854 nvme0n2: ios=2602/2930, merge=0/0, ticks=20017/31149, in_queue=51166, util=100.00% 00:43:14.854 nvme0n3: ios=5878/6144, merge=0/0, ticks=52527/48833, in_queue=101360, util=88.28% 00:43:14.854 nvme0n4: ios=5365/5632, merge=0/0, ticks=47743/53570, in_queue=101313, util=89.41% 00:43:14.854 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:14.854 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3381432 00:43:14.854 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:14.854 17:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:14.854 [global] 00:43:14.854 thread=1 
00:43:14.854 invalidate=1 00:43:14.854 rw=read 00:43:14.854 time_based=1 00:43:14.854 runtime=10 00:43:14.854 ioengine=libaio 00:43:14.854 direct=1 00:43:14.854 bs=4096 00:43:14.854 iodepth=1 00:43:14.854 norandommap=1 00:43:14.854 numjobs=1 00:43:14.854 00:43:14.854 [job0] 00:43:14.854 filename=/dev/nvme0n1 00:43:14.854 [job1] 00:43:14.854 filename=/dev/nvme0n2 00:43:14.854 [job2] 00:43:14.854 filename=/dev/nvme0n3 00:43:14.854 [job3] 00:43:14.854 filename=/dev/nvme0n4 00:43:14.854 Could not set queue depth (nvme0n1) 00:43:14.854 Could not set queue depth (nvme0n2) 00:43:14.854 Could not set queue depth (nvme0n3) 00:43:14.854 Could not set queue depth (nvme0n4) 00:43:15.115 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.115 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.115 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.115 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:15.115 fio-3.35 00:43:15.115 Starting 4 threads 00:43:17.654 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:17.915 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12423168, buflen=4096 00:43:17.915 fio: pid=3381622, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:17.915 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:17.915 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:17.915 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:17.915 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=929792, buflen=4096 00:43:17.915 fio: pid=3381621, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:18.175 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.175 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=1216512, buflen=4096 00:43:18.175 fio: pid=3381619, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:43:18.175 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:18.435 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8921088, buflen=4096 00:43:18.435 fio: pid=3381620, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:18.435 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.435 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:18.435 
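Aside: this final fio-wrapper run (-t read -r 10) is the hotplug check — while the 10-second reads are still in flight, rpc.py deletes the raid and malloc bdevs backing the namespaces, so the "Operation not supported" and "Input/output error" io_u failures reported below are the expected outcome. A minimal sketch of the same pattern, assuming it is run from the SPDK repository root with a prepared read job file (the /tmp path is illustrative, not taken from this run) and the target listening on its default RPC socket:

# start long-running reads against the exported namespaces in the background
fio /tmp/nvmf_read.fio &
fio_pid=$!
sleep 3                                    # let the reads get going first
# hot-remove one of the backing bdevs while fio is still issuing I/O
./scripts/rpc.py bdev_malloc_delete Malloc0
# fio should now finish with errors on the affected namespace
wait "$fio_pid" || echo "nvmf hotplug test: fio failed as expected"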
00:43:18.435 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3381619: Tue Oct 1 17:43:16 2024 00:43:18.435 read: IOPS=100, BW=399KiB/s (408kB/s)(1188KiB/2979msec) 00:43:18.435 slat (usec): min=6, max=14693, avg=118.29, stdev=1001.10 00:43:18.435 clat (usec): min=366, max=42193, avg=9903.92, stdev=16748.17 00:43:18.435 lat (usec): min=391, max=42218, avg=9998.39, stdev=16737.90 00:43:18.435 clat percentiles (usec): 00:43:18.435 | 1.00th=[ 537], 5.00th=[ 660], 10.00th=[ 758], 20.00th=[ 816], 00:43:18.435 | 30.00th=[ 1029], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1221], 00:43:18.435 | 70.00th=[ 1270], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:43:18.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:18.435 | 99.99th=[42206] 00:43:18.435 bw ( KiB/s): min= 95, max= 208, per=1.95%, avg=142.20, stdev=45.65, samples=5 00:43:18.435 iops : min= 23, max= 52, avg=35.40, stdev=11.61, samples=5 00:43:18.435 lat (usec) : 500=0.67%, 750=9.06%, 1000=18.46% 00:43:18.435 lat (msec) : 2=49.33%, 20=0.67%, 50=21.48% 00:43:18.435 cpu : usr=0.13%, sys=0.50%, ctx=300, majf=0, minf=1 00:43:18.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 issued rwts: total=298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.435 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3381620: Tue Oct 1 17:43:16 2024 00:43:18.435 read: IOPS=692, BW=2769KiB/s (2836kB/s)(8712KiB/3146msec) 00:43:18.435 slat (usec): min=5, max=18188, avg=59.82, stdev=718.98 00:43:18.435 clat (usec): min=400, max=42001, avg=1368.68, stdev=4629.80 00:43:18.435 lat (usec): min=407, max=42027, avg=1428.52, stdev=4683.45 00:43:18.435 clat percentiles (usec): 00:43:18.435 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 734], 00:43:18.435 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:43:18.435 | 70.00th=[ 824], 80.00th=[ 898], 90.00th=[ 1074], 95.00th=[ 1139], 00:43:18.435 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:43:18.435 | 99.99th=[42206] 00:43:18.435 bw ( KiB/s): min= 231, max= 5040, per=37.95%, avg=2767.17, stdev=2140.52, samples=6 00:43:18.435 iops : min= 57, max= 1260, avg=691.67, stdev=535.31, samples=6 00:43:18.435 lat (usec) : 500=0.23%, 750=24.92%, 1000=59.39% 00:43:18.435 lat (msec) : 2=13.91%, 10=0.14%, 20=0.05%, 50=1.33% 00:43:18.435 cpu : usr=0.57%, sys=2.03%, ctx=2185, majf=0, minf=1 00:43:18.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.435 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3381621: Tue Oct 1 17:43:16 2024 00:43:18.435 read: IOPS=80, BW=323KiB/s (330kB/s)(908KiB/2815msec) 00:43:18.435 slat (usec): min=7, max=213, avg=26.95, stdev=13.03 00:43:18.435 clat (usec): min=413, max=42128, avg=12271.63, stdev=18427.49 00:43:18.435 lat (usec): 
min=439, max=42154, avg=12298.59, stdev=18428.79 00:43:18.435 clat percentiles (usec): 00:43:18.435 | 1.00th=[ 441], 5.00th=[ 586], 10.00th=[ 668], 20.00th=[ 766], 00:43:18.435 | 30.00th=[ 848], 40.00th=[ 906], 50.00th=[ 979], 60.00th=[ 1057], 00:43:18.435 | 70.00th=[ 1205], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:18.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:18.435 | 99.99th=[42206] 00:43:18.435 bw ( KiB/s): min= 96, max= 1040, per=4.83%, avg=352.00, stdev=411.20, samples=5 00:43:18.435 iops : min= 24, max= 260, avg=88.00, stdev=102.80, samples=5 00:43:18.435 lat (usec) : 500=1.75%, 750=16.67%, 1000=33.77% 00:43:18.435 lat (msec) : 2=19.74%, 50=27.63% 00:43:18.435 cpu : usr=0.18%, sys=0.14%, ctx=229, majf=0, minf=2 00:43:18.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.435 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3381622: Tue Oct 1 17:43:16 2024 00:43:18.435 read: IOPS=1166, BW=4663KiB/s (4774kB/s)(11.8MiB/2602msec) 00:43:18.435 slat (nsec): min=5264, max=63473, avg=23389.53, stdev=7453.38 00:43:18.435 clat (usec): min=273, max=41229, avg=821.73, stdev=742.62 00:43:18.435 lat (usec): min=299, max=41238, avg=845.12, stdev=742.52 00:43:18.435 clat percentiles (usec): 00:43:18.435 | 1.00th=[ 416], 5.00th=[ 619], 10.00th=[ 685], 20.00th=[ 750], 00:43:18.435 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 824], 00:43:18.435 | 70.00th=[ 840], 80.00th=[ 873], 90.00th=[ 963], 95.00th=[ 996], 00:43:18.435 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1139], 99.95th=[ 1172], 00:43:18.435 | 99.99th=[41157] 00:43:18.435 bw ( KiB/s): min= 4055, max= 5000, per=64.44%, avg=4699.00, stdev=384.11, samples=5 00:43:18.435 iops : min= 1013, max= 1250, avg=1174.60, stdev=96.34, samples=5 00:43:18.435 lat (usec) : 500=1.91%, 750=18.29%, 1000=74.92% 00:43:18.435 lat (msec) : 2=4.81%, 50=0.03% 00:43:18.435 cpu : usr=1.08%, sys=3.27%, ctx=3034, majf=0, minf=2 00:43:18.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.435 issued rwts: total=3034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.435 00:43:18.435 Run status group 0 (all jobs): 00:43:18.435 READ: bw=7292KiB/s (7467kB/s), 323KiB/s-4663KiB/s (330kB/s-4774kB/s), io=22.4MiB (23.5MB), run=2602-3146msec 00:43:18.435 00:43:18.435 Disk stats (read/write): 00:43:18.436 nvme0n1: ios=286/0, merge=0/0, ticks=2780/0, in_queue=2780, util=94.16% 00:43:18.436 nvme0n2: ios=2146/0, merge=0/0, ticks=2914/0, in_queue=2914, util=93.15% 00:43:18.436 nvme0n3: ios=222/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.03% 00:43:18.436 nvme0n4: ios=3033/0, merge=0/0, ticks=2469/0, in_queue=2469, util=96.42% 00:43:18.436 17:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.436 17:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:18.696 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.696 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:18.957 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.957 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:18.957 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.957 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3381432 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:19.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:19.217 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:19.478 nvmf hotplug test: fio failed as expected 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:19.478 
17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:19.478 17:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:19.478 rmmod nvme_tcp 00:43:19.478 rmmod nvme_fabrics 00:43:19.739 rmmod nvme_keyring 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3378266 ']' 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3378266 ']' 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3378266' 00:43:19.739 killing process with pid 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3378266 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:19.739 17:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:43:19.739 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:19.740 17:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:22.284 00:43:22.284 real 0m27.578s 00:43:22.284 user 2m17.250s 00:43:22.284 sys 0m11.922s 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.284 ************************************ 00:43:22.284 END TEST nvmf_fio_target 00:43:22.284 ************************************ 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:22.284 ************************************ 00:43:22.284 START TEST nvmf_bdevio 00:43:22.284 ************************************ 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.284 * Looking for test storage... 
00:43:22.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.284 --rc genhtml_branch_coverage=1 00:43:22.284 --rc genhtml_function_coverage=1 00:43:22.284 --rc genhtml_legend=1 00:43:22.284 --rc geninfo_all_blocks=1 00:43:22.284 --rc geninfo_unexecuted_blocks=1 00:43:22.284 00:43:22.284 ' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.284 --rc genhtml_branch_coverage=1 00:43:22.284 --rc genhtml_function_coverage=1 00:43:22.284 --rc genhtml_legend=1 00:43:22.284 --rc geninfo_all_blocks=1 00:43:22.284 --rc geninfo_unexecuted_blocks=1 00:43:22.284 00:43:22.284 ' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.284 --rc genhtml_branch_coverage=1 00:43:22.284 --rc genhtml_function_coverage=1 00:43:22.284 --rc genhtml_legend=1 00:43:22.284 --rc geninfo_all_blocks=1 00:43:22.284 --rc geninfo_unexecuted_blocks=1 00:43:22.284 00:43:22.284 ' 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:22.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.284 --rc genhtml_branch_coverage=1 00:43:22.284 --rc genhtml_function_coverage=1 00:43:22.284 --rc genhtml_legend=1 00:43:22.284 --rc geninfo_all_blocks=1 00:43:22.284 --rc geninfo_unexecuted_blocks=1 00:43:22.284 00:43:22.284 ' 00:43:22.284 17:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:22.284 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:22.285 17:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:22.285 17:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:30.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:30.429 17:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:30.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:30.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:30.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:30.429 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:30.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:30.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:43:30.430 00:43:30.430 --- 10.0.0.2 ping statistics --- 00:43:30.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.430 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:30.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:30.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:43:30.430 00:43:30.430 --- 10.0.0.1 ping statistics --- 00:43:30.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.430 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:30.430 17:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3386638 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3386638 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3386638 ']' 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:30.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:30.430 17:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 [2024-10-01 17:43:27.952544] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:30.430 [2024-10-01 17:43:27.953669] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:43:30.430 [2024-10-01 17:43:27.953720] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.430 [2024-10-01 17:43:28.042029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:30.430 [2024-10-01 17:43:28.089366] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.430 [2024-10-01 17:43:28.089417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.430 [2024-10-01 17:43:28.089425] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.430 [2024-10-01 17:43:28.089432] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.430 [2024-10-01 17:43:28.089438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:30.430 [2024-10-01 17:43:28.089604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:43:30.430 [2024-10-01 17:43:28.089764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:43:30.430 [2024-10-01 17:43:28.089918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:30.430 [2024-10-01 17:43:28.089919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:43:30.430 [2024-10-01 17:43:28.156280] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
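Before nvmf_bdevio can start its target, the trace moves one physical port (cvl_0_0) into a fresh network namespace, gives it 10.0.0.2/24, leaves its peer cvl_0_1 in the root namespace with 10.0.0.1/24, opens TCP port 4420 with an iptables rule tagged SPDK_NVMF, and confirms reachability with a ping in each direction. nvmf_tgt is then launched inside that namespace with --interrupt-mode on core mask 0x78 (cores 3-6, matching the four reactors reported above). A condensed sketch of those steps with the names and values from this run; waitforlisten is not expanded in the trace, so the final step is only hinted at.

    # Namespace and interrupt-mode target bring-up, as traced above (run-specific names).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace in interrupt mode on cores 3-6 (mask 0x78).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # waitforlisten then polls until the app answers RPCs on /var/tmp/spdk.sock.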
00:43:30.430 [2024-10-01 17:43:28.157503] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:30.430 [2024-10-01 17:43:28.157603] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:30.430 [2024-10-01 17:43:28.158224] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:30.430 [2024-10-01 17:43:28.158284] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 [2024-10-01 17:43:28.810879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 Malloc0 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.430 17:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.430 [2024-10-01 17:43:28.895228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:30.430 { 00:43:30.430 "params": { 00:43:30.430 "name": "Nvme$subsystem", 00:43:30.430 "trtype": "$TEST_TRANSPORT", 00:43:30.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.430 "adrfam": "ipv4", 00:43:30.430 "trsvcid": "$NVMF_PORT", 00:43:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.430 "hdgst": ${hdgst:-false}, 00:43:30.430 "ddgst": ${ddgst:-false} 00:43:30.430 }, 00:43:30.430 "method": "bdev_nvme_attach_controller" 00:43:30.430 } 00:43:30.430 EOF 00:43:30.430 )") 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:43:30.430 17:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:30.430 "params": { 00:43:30.430 "name": "Nvme1", 00:43:30.430 "trtype": "tcp", 00:43:30.430 "traddr": "10.0.0.2", 00:43:30.430 "adrfam": "ipv4", 00:43:30.430 "trsvcid": "4420", 00:43:30.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:30.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:30.431 "hdgst": false, 00:43:30.431 "ddgst": false 00:43:30.431 }, 00:43:30.431 "method": "bdev_nvme_attach_controller" 00:43:30.431 }' 00:43:30.431 [2024-10-01 17:43:28.953280] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
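With the target up, bdevio.sh configures it entirely over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420. The harness's rpc_cmd wrapper forwards these arguments to scripts/rpc.py; a sketch of the equivalent direct calls, using exactly the values shown in the trace (rpc.py talks to the default /var/tmp/spdk.sock):

    # Equivalent rpc.py calls for the configuration traced above (values from this run).
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192              # TCP transport, options as passed above
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON that gen_nvmf_target_json prints just above is the matching initiator-side view: a single bdev_nvme_attach_controller entry that points Nvme1 at the same subsystem and listener.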
00:43:30.431 [2024-10-01 17:43:28.953348] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386858 ] 00:43:30.691 [2024-10-01 17:43:29.018576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:30.691 [2024-10-01 17:43:29.059616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:30.691 [2024-10-01 17:43:29.059739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:30.691 [2024-10-01 17:43:29.059742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.691 I/O targets: 00:43:30.691 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:30.691 00:43:30.691 00:43:30.691 CUnit - A unit testing framework for C - Version 2.1-3 00:43:30.691 http://cunit.sourceforge.net/ 00:43:30.691 00:43:30.691 00:43:30.691 Suite: bdevio tests on: Nvme1n1 00:43:30.951 Test: blockdev write read block ...passed 00:43:30.951 Test: blockdev write zeroes read block ...passed 00:43:30.951 Test: blockdev write zeroes read no split ...passed 00:43:30.951 Test: blockdev write zeroes read split ...passed 00:43:30.951 Test: blockdev write zeroes read split partial ...passed 00:43:30.951 Test: blockdev reset ...[2024-10-01 17:43:29.440387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:30.951 [2024-10-01 17:43:29.440449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245bc50 (9): Bad file descriptor 00:43:30.951 [2024-10-01 17:43:29.453201] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:30.951 passed 00:43:30.951 Test: blockdev write read 8 blocks ...passed 00:43:30.951 Test: blockdev write read size > 128k ...passed 00:43:30.951 Test: blockdev write read invalid size ...passed 00:43:31.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:31.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:31.212 Test: blockdev write read max offset ...passed 00:43:31.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:31.212 Test: blockdev writev readv 8 blocks ...passed 00:43:31.212 Test: blockdev writev readv 30 x 1block ...passed 00:43:31.212 Test: blockdev writev readv block ...passed 00:43:31.212 Test: blockdev writev readv size > 128k ...passed 00:43:31.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:31.212 Test: blockdev comparev and writev ...[2024-10-01 17:43:29.636827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.636857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.636868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.636874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.637423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.637433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.637443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.637448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.638000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.638009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.638019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.638024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.638588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.638596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.638605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.212 [2024-10-01 17:43:29.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:31.212 passed 00:43:31.212 Test: blockdev nvme passthru rw ...passed 00:43:31.212 Test: blockdev nvme passthru vendor specific ...[2024-10-01 17:43:29.722827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.212 [2024-10-01 17:43:29.722838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.723200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.212 [2024-10-01 17:43:29.723209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.723546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.212 [2024-10-01 17:43:29.723554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:31.212 [2024-10-01 17:43:29.723909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.212 [2024-10-01 17:43:29.723921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:31.212 passed 00:43:31.212 Test: blockdev nvme admin passthru ...passed 00:43:31.472 Test: blockdev copy ...passed 00:43:31.472 00:43:31.472 Run Summary: Type Total Ran Passed Failed Inactive 00:43:31.472 suites 1 1 n/a 0 0 00:43:31.472 tests 23 23 23 0 0 00:43:31.472 asserts 152 152 152 0 n/a 00:43:31.472 00:43:31.472 Elapsed time = 1.110 seconds 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:31.472 rmmod nvme_tcp 00:43:31.472 rmmod nvme_fabrics 00:43:31.472 rmmod nvme_keyring 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
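That generated JSON is what drove the bdevio run above: the harness substitutes gen_nvmf_target_json's output as /dev/fd/62, so the app attaches Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 before running its CUnit suite (23 tests run, 23 passed; the COMPARE FAILURE, FAILED FUSED and INVALID OPCODE completions logged during the comparev/writev and passthru tests accompany tests that are still reported as passed). A hypothetical standalone reproduction of that invocation; the params object is copied from the printf above, while the surrounding subsystems/bdev/config wrapper is the usual SPDK JSON-config shape and should be read as an assumption, since gen_nvmf_target_json itself is not expanded in this trace.

    # Hypothetical standalone equivalent of "bdevio --json /dev/fd/62" as traced above.
    ./test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF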
00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3386638 ']' 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3386638 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3386638 ']' 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3386638 00:43:31.472 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:43:31.473 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:31.473 17:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3386638 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3386638' 00:43:31.733 killing process with pid 3386638 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3386638 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3386638 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:31.733 17:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:34.278 00:43:34.278 real 0m11.886s 00:43:34.278 user 
0m8.873s 00:43:34.278 sys 0m6.251s 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:34.278 ************************************ 00:43:34.278 END TEST nvmf_bdevio 00:43:34.278 ************************************ 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:34.278 00:43:34.278 real 4m54.065s 00:43:34.278 user 10m12.126s 00:43:34.278 sys 2m0.504s 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:34.278 17:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:34.278 ************************************ 00:43:34.278 END TEST nvmf_target_core_interrupt_mode 00:43:34.278 ************************************ 00:43:34.278 17:43:32 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:34.278 17:43:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:34.278 17:43:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:34.278 17:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:34.278 ************************************ 00:43:34.278 START TEST nvmf_interrupt 00:43:34.278 ************************************ 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:34.278 * Looking for test storage... 
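Every suite in this log is driven by the same run_test wrapper: it prints the banner of asterisks with START TEST, times the script (the real/user/sys figures above are that timing for nvmf_bdevio, and the 4m54s block sums the whole nvmf_target_core_interrupt_mode group), propagates the exit code, and closes with END TEST. A simplified stand-in for that pattern follows, for illustration only and not the harness's actual definition, ending with the call that starts the nvmf_interrupt suite picked up below.

    # Simplified illustration of the run_test pattern seen throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # run the test script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode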
00:43:34.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:34.278 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:34.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.279 --rc genhtml_branch_coverage=1 00:43:34.279 --rc genhtml_function_coverage=1 00:43:34.279 --rc genhtml_legend=1 00:43:34.279 --rc geninfo_all_blocks=1 00:43:34.279 --rc geninfo_unexecuted_blocks=1 00:43:34.279 00:43:34.279 ' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:34.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.279 --rc genhtml_branch_coverage=1 00:43:34.279 --rc genhtml_function_coverage=1 00:43:34.279 --rc genhtml_legend=1 00:43:34.279 --rc geninfo_all_blocks=1 00:43:34.279 --rc geninfo_unexecuted_blocks=1 00:43:34.279 00:43:34.279 ' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:34.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.279 --rc genhtml_branch_coverage=1 00:43:34.279 --rc genhtml_function_coverage=1 00:43:34.279 --rc genhtml_legend=1 00:43:34.279 --rc geninfo_all_blocks=1 00:43:34.279 --rc geninfo_unexecuted_blocks=1 00:43:34.279 00:43:34.279 ' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:34.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.279 --rc genhtml_branch_coverage=1 00:43:34.279 --rc genhtml_function_coverage=1 00:43:34.279 --rc genhtml_legend=1 00:43:34.279 --rc geninfo_all_blocks=1 00:43:34.279 --rc geninfo_unexecuted_blocks=1 00:43:34.279 00:43:34.279 ' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:34.279 17:43:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:40.860 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:40.860 17:43:39 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:40.860 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:40.860 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:40.860 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:40.860 17:43:39 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:40.860 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:40.861 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:40.861 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:40.861 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:40.861 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:41.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:41.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:43:41.121 00:43:41.121 --- 10.0.0.2 ping statistics --- 00:43:41.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.121 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:41.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:41.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:43:41.121 00:43:41.121 --- 10.0.0.1 ping statistics --- 00:43:41.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:41.121 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3391135 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3391135 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3391135 ']' 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:41.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:41.121 17:43:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.121 [2024-10-01 17:43:39.664496] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:41.121 [2024-10-01 17:43:39.665575] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:43:41.121 [2024-10-01 17:43:39.665620] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:41.381 [2024-10-01 17:43:39.735973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:41.381 [2024-10-01 17:43:39.767302] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:41.381 [2024-10-01 17:43:39.767337] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:41.381 [2024-10-01 17:43:39.767347] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:41.381 [2024-10-01 17:43:39.767353] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:41.381 [2024-10-01 17:43:39.767359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:41.381 [2024-10-01 17:43:39.767499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.381 [2024-10-01 17:43:39.767501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:41.381 [2024-10-01 17:43:39.815288] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:41.381 [2024-10-01 17:43:39.815701] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:41.381 [2024-10-01 17:43:39.816112] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:41.952 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:42.213 5000+0 records in 00:43:42.213 5000+0 records out 00:43:42.213 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174133 s, 588 MB/s 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.213 AIO0 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.213 [2024-10-01 17:43:40.564101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.213 17:43:40 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.213 [2024-10-01 17:43:40.600467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3391135 0 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 0 idle 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:42.213 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391135 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.22 reactor_0' 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391135 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.22 reactor_0 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3391135 1 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 1 idle 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:42.474 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391187 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.00 reactor_1' 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391187 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.00 reactor_1 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3391405 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3391135 0 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # 
reactor_is_busy_or_idle 3391135 0 busy 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:42.475 17:43:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391135 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:00.37 reactor_0' 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391135 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:00.37 reactor_0 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:42.736 17:43:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3391135 1 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3391135 1 busy 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:42.737 17:43:41 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:42.737 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391187 root 20 0 128.2g 42624 31104 R 87.5 0.0 0:00.25 reactor_1' 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391187 root 20 0 128.2g 42624 31104 R 87.5 0.0 0:00.25 reactor_1 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:42.997 17:43:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3391405 00:43:53.000 Initializing NVMe Controllers 00:43:53.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:53.000 Controller IO queue size 256, less than required. 00:43:53.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:53.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:53.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:53.000 Initialization complete. Launching workers. 
00:43:53.000 ======================================================== 00:43:53.000 Latency(us) 00:43:53.000 Device Information : IOPS MiB/s Average min max 00:43:53.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16135.10 63.03 15877.27 2315.87 54939.35 00:43:53.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18223.40 71.19 14049.90 7321.43 27616.55 00:43:53.000 ======================================================== 00:43:53.000 Total : 34358.50 134.21 14908.05 2315.87 54939.35 00:43:53.000 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3391135 0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 0 idle 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391135 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.23 reactor_0' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391135 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.23 reactor_0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3391135 1 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 1 idle 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391187 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.01 reactor_1' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391187 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.01 reactor_1 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.000 17:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:53.570 17:43:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:53.570 17:43:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:43:53.570 17:43:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:53.570 17:43:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:43:53.570 17:43:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3391135 0 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 0 idle 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:55.535 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:55.822 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391135 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.48 reactor_0' 00:43:55.822 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391135 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.48 reactor_0 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3391135 1 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3391135 1 idle 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3391135 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3391135 -w 256 00:43:55.823 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3391187 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.15 reactor_1' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3391187 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.15 reactor_1 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:56.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:56.109 rmmod nvme_tcp 00:43:56.109 rmmod nvme_fabrics 00:43:56.109 rmmod nvme_keyring 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
3391135 ']' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3391135 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3391135 ']' 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3391135 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:43:56.109 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3391135 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3391135' 00:43:56.369 killing process with pid 3391135 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3391135 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3391135 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:56.369 17:43:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:58.910 17:43:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:58.910 00:43:58.910 real 0m24.519s 00:43:58.910 user 0m40.153s 00:43:58.910 sys 0m9.015s 00:43:58.910 17:43:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:58.910 17:43:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:58.910 ************************************ 00:43:58.910 END TEST nvmf_interrupt 00:43:58.910 ************************************ 00:43:58.910 00:43:58.910 real 37m34.778s 00:43:58.910 user 91m18.134s 00:43:58.910 sys 10m59.252s 00:43:58.910 17:43:56 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:58.910 17:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.910 ************************************ 00:43:58.910 END TEST nvmf_tcp 00:43:58.910 ************************************ 00:43:58.910 17:43:57 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:43:58.910 17:43:57 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:58.910 17:43:57 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:58.910 17:43:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:58.910 17:43:57 -- common/autotest_common.sh@10 -- # set +x 00:43:58.910 ************************************ 00:43:58.910 START TEST spdkcli_nvmf_tcp 00:43:58.910 ************************************ 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:58.910 * Looking for test storage... 00:43:58.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:58.910 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.911 --rc genhtml_branch_coverage=1 00:43:58.911 --rc genhtml_function_coverage=1 00:43:58.911 --rc genhtml_legend=1 00:43:58.911 --rc geninfo_all_blocks=1 00:43:58.911 --rc geninfo_unexecuted_blocks=1 00:43:58.911 00:43:58.911 ' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.911 --rc genhtml_branch_coverage=1 00:43:58.911 --rc genhtml_function_coverage=1 00:43:58.911 --rc genhtml_legend=1 00:43:58.911 --rc geninfo_all_blocks=1 00:43:58.911 --rc geninfo_unexecuted_blocks=1 00:43:58.911 00:43:58.911 ' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.911 --rc genhtml_branch_coverage=1 00:43:58.911 --rc genhtml_function_coverage=1 00:43:58.911 --rc genhtml_legend=1 00:43:58.911 --rc geninfo_all_blocks=1 00:43:58.911 --rc geninfo_unexecuted_blocks=1 00:43:58.911 00:43:58.911 ' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.911 --rc genhtml_branch_coverage=1 00:43:58.911 --rc genhtml_function_coverage=1 00:43:58.911 --rc genhtml_legend=1 00:43:58.911 --rc geninfo_all_blocks=1 00:43:58.911 --rc geninfo_unexecuted_blocks=1 00:43:58.911 00:43:58.911 ' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:58.911 
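Editor's note: the trace above shows scripts/common.sh comparing the installed lcov version against 2 field by field (the version string is split on '.', '-' and ':') to decide whether the legacy --rc lcov_*_coverage=1 options are needed. A minimal standalone sketch of that comparison, under the assumption that the helper name ver_lt is hypothetical (the real logic lives in scripts/common.sh as lt/cmp_versions) and that the version fields are purely numeric:

    ver_lt() {
        # usage: ver_lt A B  -> returns 0 (true) if version A sorts before version B
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                        # equal is not "less than"
    }

    # pick lcov options the same way the trace above does
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi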
17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:58.911 17:43:57 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:58.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3394593 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3394593 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3394593 ']' 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:58.911 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.911 [2024-10-01 17:43:57.348309] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:43:58.911 [2024-10-01 17:43:57.348385] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394593 ] 00:43:58.911 [2024-10-01 17:43:57.413130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:58.911 [2024-10-01 17:43:57.451464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:58.911 [2024-10-01 17:43:57.451469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.172 17:43:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:59.172 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:59.172 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:59.172 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:59.172 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:59.172 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:43:59.172 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:59.172 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:59.172 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:59.172 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:59.172 ' 00:44:02.466 [2024-10-01 17:44:00.296590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.405 [2024-10-01 17:44:01.653090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:05.945 [2024-10-01 17:44:04.188689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:07.862 [2024-10-01 17:44:06.395404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:09.775 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:09.775 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:09.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:09.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:09.775 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:09.775 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:09.775 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:09.775 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:09.775 17:44:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:10.037 17:44:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.298 
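Editor's note: the "Executing command" lines above come from spdkcli_job.py driving scripts/spdkcli.py in batch. The same configuration can be rebuilt by hand with one-shot spdkcli.py calls; this is a hedged sketch only, with the commands copied verbatim from the job list above, and it assumes the nvmf_tgt started earlier is still listening on the default /var/tmp/spdk.sock and that spdkcli.py accepts a one-shot command on its command line, as the "spdkcli.py ll /nvmf" invocation above does:

    # run from the SPDK repo root against the already-running target
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf   # inspect the resulting tree, as check_match does above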
17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.298 17:44:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:10.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:10.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:10.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:10.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:10.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:10.298 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:10.298 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:10.298 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:10.298 ' 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:15.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:15.585 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:15.585 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:15.585 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:15.585 
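Editor's note: the spdkcli_clear_nvmf_config pass above tears the tree down in roughly the reverse order it was built: namespaces and hosts first, then listen addresses, then the subsystems, and finally the malloc bdevs. A hedged one-shot equivalent of that teardown, with the commands copied from the job list above and the same assumptions as the setup sketch (running target on the default RPC socket, one-shot spdkcli.py invocation):

    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
    ./scripts/spdkcli.py /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3
    ./scripts/spdkcli.py /nvmf/subsystem delete_all
    ./scripts/spdkcli.py /bdevs/malloc delete Malloc1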
17:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3394593 ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394593' 00:44:15.585 killing process with pid 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3394593 ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3394593 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3394593 ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3394593 00:44:15.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3394593) - No such process 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3394593 is not found' 00:44:15.585 Process with pid 3394593 is not found 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:15.585 00:44:15.585 real 0m16.914s 00:44:15.585 user 0m36.754s 00:44:15.585 sys 0m0.782s 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:15.585 17:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:15.585 ************************************ 00:44:15.585 END TEST spdkcli_nvmf_tcp 00:44:15.585 ************************************ 00:44:15.585 17:44:13 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:15.585 17:44:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:15.585 17:44:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:15.585 17:44:14 -- common/autotest_common.sh@10 -- # set +x 00:44:15.585 ************************************ 00:44:15.585 START TEST nvmf_identify_passthru 00:44:15.585 ************************************ 00:44:15.585 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:15.585 * Looking for test 
storage... 00:44:15.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:15.846 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:15.846 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:44:15.846 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:15.846 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:15.846 17:44:14 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:15.846 17:44:14 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:15.846 17:44:14 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:15.847 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:15.847 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.847 --rc genhtml_branch_coverage=1 00:44:15.847 --rc genhtml_function_coverage=1 00:44:15.847 --rc genhtml_legend=1 00:44:15.847 --rc geninfo_all_blocks=1 00:44:15.847 --rc geninfo_unexecuted_blocks=1 00:44:15.847 00:44:15.847 ' 00:44:15.847 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.847 --rc genhtml_branch_coverage=1 00:44:15.847 --rc genhtml_function_coverage=1 00:44:15.847 --rc genhtml_legend=1 00:44:15.847 --rc geninfo_all_blocks=1 00:44:15.847 --rc geninfo_unexecuted_blocks=1 00:44:15.847 00:44:15.847 ' 00:44:15.847 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.847 --rc genhtml_branch_coverage=1 00:44:15.847 --rc genhtml_function_coverage=1 00:44:15.847 --rc genhtml_legend=1 00:44:15.847 --rc geninfo_all_blocks=1 00:44:15.847 --rc geninfo_unexecuted_blocks=1 00:44:15.847 00:44:15.847 ' 00:44:15.847 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:15.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.847 --rc genhtml_branch_coverage=1 00:44:15.847 --rc genhtml_function_coverage=1 00:44:15.847 --rc genhtml_legend=1 00:44:15.847 --rc geninfo_all_blocks=1 00:44:15.847 --rc geninfo_unexecuted_blocks=1 00:44:15.847 00:44:15.847 ' 00:44:15.847 17:44:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:15.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:15.847 17:44:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:15.847 17:44:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:15.847 17:44:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.847 17:44:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:15.847 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:15.848 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:15.848 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:15.848 17:44:14 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:15.848 17:44:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:23.987 17:44:21 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:23.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:23.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:23.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:23.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:23.987 17:44:21 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:23.987 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:23.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:23.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:44:23.988 00:44:23.988 --- 10.0.0.2 ping statistics --- 00:44:23.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.988 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:23.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:23.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:44:23.988 00:44:23.988 --- 10.0.0.1 ping statistics --- 00:44:23.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.988 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:23.988 17:44:21 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:44:23.988 17:44:21 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:23.988 17:44:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:23.988 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:44:23.988 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:23.988 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:23.988 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3401668 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:24.247 17:44:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3401668 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3401668 ']' 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:24.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:24.247 17:44:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:24.247 [2024-10-01 17:44:22.652014] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:44:24.247 [2024-10-01 17:44:22.652068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:24.247 [2024-10-01 17:44:22.719808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:24.247 [2024-10-01 17:44:22.754747] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:24.248 [2024-10-01 17:44:22.754786] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:24.248 [2024-10-01 17:44:22.754793] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:24.248 [2024-10-01 17:44:22.754800] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:24.248 [2024-10-01 17:44:22.754806] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:24.248 [2024-10-01 17:44:22.754948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:24.248 [2024-10-01 17:44:22.755106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:24.248 [2024-10-01 17:44:22.755517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:24.248 [2024-10-01 17:44:22.755519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:44:25.188 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.188 INFO: Log level set to 20 00:44:25.188 INFO: Requests: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "method": "nvmf_set_config", 00:44:25.188 "id": 1, 00:44:25.188 "params": { 00:44:25.188 "admin_cmd_passthru": { 00:44:25.188 "identify_ctrlr": true 00:44:25.188 } 00:44:25.188 } 00:44:25.188 } 00:44:25.188 00:44:25.188 INFO: response: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "id": 1, 00:44:25.188 "result": true 00:44:25.188 } 00:44:25.188 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.188 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.188 INFO: Setting log level to 20 00:44:25.188 INFO: Setting log level to 20 00:44:25.188 INFO: Log level set to 20 00:44:25.188 INFO: Log level set to 20 00:44:25.188 INFO: Requests: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "method": "framework_start_init", 00:44:25.188 "id": 1 00:44:25.188 } 00:44:25.188 00:44:25.188 INFO: Requests: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "method": "framework_start_init", 00:44:25.188 "id": 1 00:44:25.188 } 00:44:25.188 00:44:25.188 [2024-10-01 17:44:23.532002] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:25.188 INFO: response: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "id": 1, 00:44:25.188 "result": true 00:44:25.188 } 00:44:25.188 00:44:25.188 INFO: response: 00:44:25.188 { 00:44:25.188 "jsonrpc": "2.0", 00:44:25.188 "id": 1, 00:44:25.188 "result": true 00:44:25.188 } 00:44:25.188 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.188 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.188 17:44:23 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:25.188 INFO: Setting log level to 40 00:44:25.188 INFO: Setting log level to 40 00:44:25.188 INFO: Setting log level to 40 00:44:25.188 [2024-10-01 17:44:23.545322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.188 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.188 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.188 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.449 Nvme0n1 00:44:25.449 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.449 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:25.449 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.449 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.449 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.450 [2024-10-01 17:44:23.924168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.450 [ 00:44:25.450 { 00:44:25.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:25.450 "subtype": "Discovery", 00:44:25.450 "listen_addresses": [], 00:44:25.450 "allow_any_host": true, 00:44:25.450 "hosts": [] 00:44:25.450 }, 00:44:25.450 { 00:44:25.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:25.450 "subtype": "NVMe", 00:44:25.450 "listen_addresses": [ 00:44:25.450 { 00:44:25.450 "trtype": "TCP", 00:44:25.450 "adrfam": "IPv4", 00:44:25.450 "traddr": "10.0.0.2", 00:44:25.450 "trsvcid": "4420" 00:44:25.450 } 00:44:25.450 ], 00:44:25.450 "allow_any_host": true, 00:44:25.450 "hosts": [], 00:44:25.450 "serial_number": 
"SPDK00000000000001", 00:44:25.450 "model_number": "SPDK bdev Controller", 00:44:25.450 "max_namespaces": 1, 00:44:25.450 "min_cntlid": 1, 00:44:25.450 "max_cntlid": 65519, 00:44:25.450 "namespaces": [ 00:44:25.450 { 00:44:25.450 "nsid": 1, 00:44:25.450 "bdev_name": "Nvme0n1", 00:44:25.450 "name": "Nvme0n1", 00:44:25.450 "nguid": "36344730526054870025384500000044", 00:44:25.450 "uuid": "36344730-5260-5487-0025-384500000044" 00:44:25.450 } 00:44:25.450 ] 00:44:25.450 } 00:44:25.450 ] 00:44:25.450 17:44:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:25.450 17:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:25.711 17:44:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:25.711 rmmod nvme_tcp 00:44:25.711 rmmod nvme_fabrics 00:44:25.711 rmmod nvme_keyring 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
3401668 ']' 00:44:25.711 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3401668 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3401668 ']' 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3401668 00:44:25.711 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401668 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401668' 00:44:25.972 killing process with pid 3401668 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3401668 00:44:25.972 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3401668 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:26.232 17:44:24 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:26.232 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:26.232 17:44:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.140 17:44:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:28.140 00:44:28.140 real 0m12.617s 00:44:28.140 user 0m9.846s 00:44:28.140 sys 0m6.004s 00:44:28.140 17:44:26 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:28.140 17:44:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.140 ************************************ 00:44:28.140 END TEST nvmf_identify_passthru 00:44:28.140 ************************************ 00:44:28.402 17:44:26 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:28.402 17:44:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:28.402 17:44:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:28.402 17:44:26 -- common/autotest_common.sh@10 -- # set +x 00:44:28.402 ************************************ 00:44:28.402 START TEST nvmf_dif 00:44:28.402 ************************************ 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:28.402 * Looking for test storage... 
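[editor's note] The verification and teardown just logged amount to the steps below. scripts/rpc.py with the default socket stands in for the suite's rpc_cmd wrapper, tcp_serial is an illustrative variable name, and the final namespace removal is an assumption about what the _remove_spdk_ns helper does.

# Identity read back over NVMe/TCP must match what the local controller reported.
tcp_serial=$(build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
[ "$tcp_serial" = "$serial" ] || exit 1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# nvmftestfini: unload the initiator modules, stop the target, undo the
# firewall rule and drop the test namespace.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk        # assumption: this is what _remove_spdk_ns does
ip -4 addr flush cvl_0_1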
00:44:28.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:28.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.402 --rc genhtml_branch_coverage=1 00:44:28.402 --rc genhtml_function_coverage=1 00:44:28.402 --rc genhtml_legend=1 00:44:28.402 --rc geninfo_all_blocks=1 00:44:28.402 --rc geninfo_unexecuted_blocks=1 00:44:28.402 00:44:28.402 ' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:28.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.402 --rc genhtml_branch_coverage=1 00:44:28.402 --rc genhtml_function_coverage=1 00:44:28.402 --rc genhtml_legend=1 00:44:28.402 --rc geninfo_all_blocks=1 00:44:28.402 --rc geninfo_unexecuted_blocks=1 00:44:28.402 00:44:28.402 ' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:44:28.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.402 --rc genhtml_branch_coverage=1 00:44:28.402 --rc genhtml_function_coverage=1 00:44:28.402 --rc genhtml_legend=1 00:44:28.402 --rc geninfo_all_blocks=1 00:44:28.402 --rc geninfo_unexecuted_blocks=1 00:44:28.402 00:44:28.402 ' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:28.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.402 --rc genhtml_branch_coverage=1 00:44:28.402 --rc genhtml_function_coverage=1 00:44:28.402 --rc genhtml_legend=1 00:44:28.402 --rc geninfo_all_blocks=1 00:44:28.402 --rc geninfo_unexecuted_blocks=1 00:44:28.402 00:44:28.402 ' 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:28.402 17:44:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:28.402 17:44:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.402 17:44:26 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.402 17:44:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.402 17:44:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:28.402 17:44:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:28.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:28.402 17:44:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:28.402 17:44:26 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:28.402 17:44:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.663 17:44:26 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:28.663 17:44:26 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:28.663 17:44:26 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:28.663 17:44:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:35.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.245 
17:44:33 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:35.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:35.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:35.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:35.245 17:44:33 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:35.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:35.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:44:35.505 00:44:35.505 --- 10.0.0.2 ping statistics --- 00:44:35.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.505 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:35.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
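[editor's note] Stripped of the xtrace noise, the namespace plumbing that these two pings verify is the following; the interface names cvl_0_0/cvl_0_1 are this host's E810 ports, and the ACCEPT rule is copied verbatim from the trace.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # namespace -> host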
00:44:35.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:44:35.505 00:44:35.505 --- 10.0.0.1 ping statistics --- 00:44:35.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.505 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:44:35.505 17:44:33 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:38.804 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:38.804 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:38.804 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:39.376 17:44:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:39.376 17:44:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3407537 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3407537 00:44:39.376 17:44:37 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3407537 ']' 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:39.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:39.376 [2024-10-01 17:44:37.737843] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:44:39.376 [2024-10-01 17:44:37.737897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:39.376 [2024-10-01 17:44:37.806646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.376 [2024-10-01 17:44:37.839525] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:39.376 [2024-10-01 17:44:37.839565] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:39.376 [2024-10-01 17:44:37.839573] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:39.376 [2024-10-01 17:44:37.839580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:39.376 [2024-10-01 17:44:37.839586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:39.376 [2024-10-01 17:44:37.839604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.376 17:44:37 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:44:39.637 17:44:37 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:39.637 17:44:37 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:39.637 17:44:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:39.637 17:44:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:39.637 [2024-10-01 17:44:37.968220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.637 17:44:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:39.637 17:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:39.637 ************************************ 00:44:39.638 START TEST fio_dif_1_default 00:44:39.638 ************************************ 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:39.638 bdev_null0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:39.638 [2024-10-01 17:44:38.052561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:39.638 { 00:44:39.638 "params": { 00:44:39.638 "name": "Nvme$subsystem", 00:44:39.638 "trtype": "$TEST_TRANSPORT", 00:44:39.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:39.638 "adrfam": "ipv4", 00:44:39.638 "trsvcid": "$NVMF_PORT", 00:44:39.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:39.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:39.638 "hdgst": ${hdgst:-false}, 00:44:39.638 "ddgst": ${ddgst:-false} 00:44:39.638 }, 00:44:39.638 "method": "bdev_nvme_attach_controller" 00:44:39.638 } 00:44:39.638 EOF 00:44:39.638 )") 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:39.638 "params": { 00:44:39.638 "name": "Nvme0", 00:44:39.638 "trtype": "tcp", 00:44:39.638 "traddr": "10.0.0.2", 00:44:39.638 "adrfam": "ipv4", 00:44:39.638 "trsvcid": "4420", 00:44:39.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:39.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:39.638 "hdgst": false, 00:44:39.638 "ddgst": false 00:44:39.638 }, 00:44:39.638 "method": "bdev_nvme_attach_controller" 00:44:39.638 }' 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:39.638 17:44:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:40.226 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:40.226 fio-3.35 00:44:40.226 Starting 1 thread 00:44:52.445 00:44:52.445 filename0: (groupid=0, jobs=1): err= 0: pid=3408046: Tue Oct 1 17:44:49 2024 00:44:52.445 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:44:52.445 slat (nsec): min=5399, max=31225, avg=6162.48, stdev=1551.87 00:44:52.445 clat (usec): min=40814, max=43308, avg=41062.45, stdev=298.83 00:44:52.445 lat (usec): min=40819, max=43339, avg=41068.61, stdev=299.18 00:44:52.445 clat percentiles (usec): 00:44:52.445 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:52.445 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:52.445 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:44:52.445 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:44:52.445 | 99.99th=[43254] 00:44:52.445 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=388.80, stdev=11.72, samples=20 00:44:52.445 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:44:52.445 lat (msec) : 50=100.00% 00:44:52.445 cpu : usr=93.70%, sys=6.10%, ctx=7, majf=0, minf=225 00:44:52.445 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:52.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.445 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.445 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:52.445 00:44:52.445 Run status group 0 (all jobs): 00:44:52.445 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10024-10024msec 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.445 00:44:52.445 real 0m11.253s 00:44:52.445 user 0m26.316s 00:44:52.445 sys 0m0.914s 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 
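[editor's note] For reference, fio reaches the DIF-enabled null bdev in the run above through the SPDK fio bdev plugin plus a JSON config that attaches an NVMe-oF controller. The trace only prints the inner bdev_nvme_attach_controller entry; the "subsystems"/"bdev" envelope below is the usual SPDK JSON-config shape and is an assumption, as are the exact job options, which only approximate the generated job file. Paths are relative to the SPDK tree.

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Preload the plugin and run the read job against the namespace bdev (Nvme0n1).
LD_PRELOAD=build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
    --thread=1 --time_based --runtime=10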
00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:52.445 ************************************ 00:44:52.445 END TEST fio_dif_1_default 00:44:52.445 ************************************ 00:44:52.445 17:44:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:52.445 17:44:49 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:52.445 17:44:49 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:52.445 17:44:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:52.445 ************************************ 00:44:52.445 START TEST fio_dif_1_multi_subsystems 00:44:52.445 ************************************ 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.445 bdev_null0 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.445 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 [2024-10-01 17:44:49.365718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 bdev_null1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:52.446 { 00:44:52.446 "params": { 00:44:52.446 "name": "Nvme$subsystem", 00:44:52.446 "trtype": "$TEST_TRANSPORT", 00:44:52.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:52.446 "adrfam": "ipv4", 00:44:52.446 "trsvcid": "$NVMF_PORT", 00:44:52.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:52.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:52.446 "hdgst": ${hdgst:-false}, 00:44:52.446 "ddgst": ${ddgst:-false} 00:44:52.446 }, 00:44:52.446 "method": "bdev_nvme_attach_controller" 00:44:52.446 } 00:44:52.446 EOF 00:44:52.446 )") 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:52.446 { 00:44:52.446 "params": { 00:44:52.446 "name": "Nvme$subsystem", 00:44:52.446 "trtype": "$TEST_TRANSPORT", 00:44:52.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:52.446 "adrfam": "ipv4", 00:44:52.446 "trsvcid": "$NVMF_PORT", 00:44:52.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:52.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:52.446 "hdgst": ${hdgst:-false}, 00:44:52.446 "ddgst": ${ddgst:-false} 00:44:52.446 }, 00:44:52.446 "method": "bdev_nvme_attach_controller" 00:44:52.446 } 00:44:52.446 EOF 00:44:52.446 )") 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems 
-- nvmf/common.sh@582 -- # jq . 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:52.446 "params": { 00:44:52.446 "name": "Nvme0", 00:44:52.446 "trtype": "tcp", 00:44:52.446 "traddr": "10.0.0.2", 00:44:52.446 "adrfam": "ipv4", 00:44:52.446 "trsvcid": "4420", 00:44:52.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:52.446 "hdgst": false, 00:44:52.446 "ddgst": false 00:44:52.446 }, 00:44:52.446 "method": "bdev_nvme_attach_controller" 00:44:52.446 },{ 00:44:52.446 "params": { 00:44:52.446 "name": "Nvme1", 00:44:52.446 "trtype": "tcp", 00:44:52.446 "traddr": "10.0.0.2", 00:44:52.446 "adrfam": "ipv4", 00:44:52.446 "trsvcid": "4420", 00:44:52.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:52.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:52.446 "hdgst": false, 00:44:52.446 "ddgst": false 00:44:52.446 }, 00:44:52.446 "method": "bdev_nvme_attach_controller" 00:44:52.446 }' 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:52.446 17:44:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:52.446 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:52.446 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:52.446 fio-3.35 00:44:52.446 Starting 2 threads 00:45:02.436 00:45:02.436 filename0: (groupid=0, jobs=1): err= 0: pid=3410246: Tue Oct 1 17:45:00 2024 00:45:02.436 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10019msec) 00:45:02.436 slat (nsec): min=5457, max=44415, avg=6539.34, stdev=1832.53 00:45:02.436 clat (usec): min=517, max=43797, avg=21118.42, stdev=20145.20 00:45:02.436 lat (usec): min=522, max=43830, avg=21124.96, stdev=20144.98 00:45:02.436 clat percentiles (usec): 00:45:02.436 | 1.00th=[ 635], 5.00th=[ 799], 10.00th=[ 881], 20.00th=[ 906], 00:45:02.436 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[40633], 60.00th=[41157], 00:45:02.436 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:02.436 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:45:02.436 | 99.99th=[43779] 00:45:02.436 bw ( KiB/s): min= 672, max= 
768, per=66.06%, avg=756.80, stdev=26.01, samples=20 00:45:02.436 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:45:02.436 lat (usec) : 750=4.22%, 1000=44.94% 00:45:02.436 lat (msec) : 2=0.63%, 50=50.21% 00:45:02.436 cpu : usr=95.40%, sys=4.36%, ctx=13, majf=0, minf=183 00:45:02.436 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:02.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.436 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:02.436 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:02.436 filename1: (groupid=0, jobs=1): err= 0: pid=3410247: Tue Oct 1 17:45:00 2024 00:45:02.436 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10039msec) 00:45:02.436 slat (nsec): min=5427, max=33798, avg=6860.17, stdev=2086.98 00:45:02.436 clat (usec): min=40838, max=42973, avg=41123.14, stdev=368.10 00:45:02.436 lat (usec): min=40845, max=42980, avg=41130.00, stdev=368.40 00:45:02.436 clat percentiles (usec): 00:45:02.436 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:45:02.436 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:02.436 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:45:02.436 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:45:02.436 | 99.99th=[42730] 00:45:02.436 bw ( KiB/s): min= 384, max= 416, per=33.91%, avg=388.80, stdev=11.72, samples=20 00:45:02.436 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:02.436 lat (msec) : 50=100.00% 00:45:02.436 cpu : usr=95.10%, sys=4.66%, ctx=14, majf=0, minf=127 00:45:02.436 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:02.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:02.436 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:02.436 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:02.436 00:45:02.436 Run status group 0 (all jobs): 00:45:02.436 READ: bw=1144KiB/s (1172kB/s), 389KiB/s-757KiB/s (398kB/s-775kB/s), io=11.2MiB (11.8MB), run=10019-10039msec 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 00:45:02.436 real 0m11.419s 00:45:02.436 user 0m32.024s 00:45:02.436 sys 0m1.292s 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 ************************************ 00:45:02.436 END TEST fio_dif_1_multi_subsystems 00:45:02.436 ************************************ 00:45:02.436 17:45:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:02.436 17:45:00 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:02.436 17:45:00 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 ************************************ 00:45:02.436 START TEST fio_dif_rand_params 00:45:02.436 ************************************ 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 bdev_null0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:02.436 [2024-10-01 17:45:00.855278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:02.436 17:45:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:02.436 { 00:45:02.436 "params": { 00:45:02.436 "name": "Nvme$subsystem", 00:45:02.436 "trtype": "$TEST_TRANSPORT", 00:45:02.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:02.436 "adrfam": "ipv4", 00:45:02.436 "trsvcid": "$NVMF_PORT", 00:45:02.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:02.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:02.436 "hdgst": ${hdgst:-false}, 00:45:02.436 "ddgst": ${ddgst:-false} 00:45:02.436 }, 00:45:02.436 "method": "bdev_nvme_attach_controller" 00:45:02.436 } 00:45:02.436 EOF 00:45:02.436 )") 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
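The trace above shows the test building its target side with rpc_cmd (a 64 MiB null bdev carrying 16-byte metadata and DIF, an NVMe-oF subsystem, a namespace, and a TCP listener on 10.0.0.2:4420) and then assembling per-subsystem JSON for fio's spdk_bdev ioengine. A minimal hand-run sketch of those target-side calls is shown below, assuming rpc_cmd forwards to SPDK's scripts/rpc.py and that the TCP transport already exists; the script path is inferred from this workspace and everything else mirrors the traced arguments:

  # Sketch only: manual equivalent of the rpc_cmd calls traced above.
  # Assumes the tcp transport was already created (nvmf_create_transport -t tcp),
  # as the dif.sh harness does earlier in the run.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

  # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # NVMe-oF subsystem with that bdev as a namespace, listening on NVMe/TCP
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The JSON printed next in the trace is what the harness hands to fio as --spdk_json_conf, so the bdev_nvme_attach_controller call on the initiator side can reach that subsystem as a local bdev.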
00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:02.436 17:45:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:02.436 "params": { 00:45:02.436 "name": "Nvme0", 00:45:02.436 "trtype": "tcp", 00:45:02.436 "traddr": "10.0.0.2", 00:45:02.436 "adrfam": "ipv4", 00:45:02.436 "trsvcid": "4420", 00:45:02.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:02.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:02.437 "hdgst": false, 00:45:02.437 "ddgst": false 00:45:02.437 }, 00:45:02.437 "method": "bdev_nvme_attach_controller" 00:45:02.437 }' 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:02.437 17:45:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.028 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:03.028 ... 
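The filename0 banner above reflects the NULL_DIF=3 parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=5). Written out as an ordinary fio job file instead of the generated /dev/fd descriptors, the run would look roughly like the sketch below; the job-file layout, the thread option, the time_based flag and the Nvme0n1 filename are assumptions, while the ioengine, block size, queue depth, job count and runtime come from the trace:

  # Rough stand-alone equivalent of the generated fio job (sketch only).
  cat > /tmp/dif_rand_params.fio <<'EOF'
  ; spdk_json_conf points at a file with the same JSON shape as the config
  ; printed above; Nvme0n1 is an assumed bdev name after the nvme attach
  [global]
  ioengine=spdk_bdev
  spdk_json_conf=/tmp/nvme_attach.json
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1
  EOF

  # Run with the SPDK bdev plugin preloaded, as the harness does:
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio /tmp/dif_rand_params.fio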
00:45:03.028 fio-3.35 00:45:03.028 Starting 3 threads 00:45:09.614 00:45:09.614 filename0: (groupid=0, jobs=1): err= 0: pid=3412714: Tue Oct 1 17:45:06 2024 00:45:09.614 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(154MiB/5036msec) 00:45:09.614 slat (nsec): min=3045, max=18132, avg=6122.58, stdev=659.78 00:45:09.614 clat (usec): min=6218, max=51724, avg=12242.22, stdev=4913.06 00:45:09.614 lat (usec): min=6223, max=51730, avg=12248.34, stdev=4913.04 00:45:09.614 clat percentiles (usec): 00:45:09.614 | 1.00th=[ 7177], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10421], 00:45:09.614 | 30.00th=[10945], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:45:09.614 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13566], 95.00th=[13960], 00:45:09.614 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:45:09.614 | 99.99th=[51643] 00:45:09.614 bw ( KiB/s): min=27392, max=35328, per=33.67%, avg=31488.00, stdev=2358.66, samples=10 00:45:09.614 iops : min= 214, max= 276, avg=246.00, stdev=18.43, samples=10 00:45:09.614 lat (msec) : 10=14.52%, 20=84.02%, 50=0.41%, 100=1.05% 00:45:09.614 cpu : usr=95.00%, sys=4.77%, ctx=34, majf=0, minf=131 00:45:09.614 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 issued rwts: total=1233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:09.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:09.614 filename0: (groupid=0, jobs=1): err= 0: pid=3412715: Tue Oct 1 17:45:06 2024 00:45:09.614 read: IOPS=240, BW=30.1MiB/s (31.5MB/s)(152MiB/5045msec) 00:45:09.614 slat (nsec): min=7997, max=31598, avg=8880.49, stdev=1417.74 00:45:09.614 clat (usec): min=7733, max=54365, avg=12430.94, stdev=5515.61 00:45:09.614 lat (usec): min=7745, max=54374, avg=12439.82, stdev=5515.54 00:45:09.614 clat percentiles (usec): 00:45:09.614 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10552], 00:45:09.614 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:45:09.614 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13829], 00:45:09.614 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:45:09.614 | 99.99th=[54264] 00:45:09.614 bw ( KiB/s): min=24832, max=34048, per=33.15%, avg=31001.60, stdev=3292.69, samples=10 00:45:09.614 iops : min= 194, max= 266, avg=242.20, stdev=25.72, samples=10 00:45:09.614 lat (msec) : 10=12.94%, 20=85.16%, 50=0.41%, 100=1.48% 00:45:09.614 cpu : usr=94.85%, sys=4.94%, ctx=8, majf=0, minf=72 00:45:09.614 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 issued rwts: total=1213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:09.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:09.614 filename0: (groupid=0, jobs=1): err= 0: pid=3412716: Tue Oct 1 17:45:06 2024 00:45:09.614 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5006msec) 00:45:09.614 slat (nsec): min=5715, max=31303, avg=8429.75, stdev=1487.33 00:45:09.614 clat (usec): min=5361, max=52810, avg=12097.90, stdev=2564.50 00:45:09.614 lat (usec): min=5370, max=52819, avg=12106.33, stdev=2564.50 00:45:09.614 clat percentiles (usec): 00:45:09.614 | 1.00th=[ 7504], 5.00th=[ 9241], 10.00th=[ 9634], 
20.00th=[10552], 00:45:09.614 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:45:09.614 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13960], 95.00th=[14353], 00:45:09.614 | 99.00th=[15270], 99.50th=[15664], 99.90th=[52691], 99.95th=[52691], 00:45:09.614 | 99.99th=[52691] 00:45:09.614 bw ( KiB/s): min=29952, max=33280, per=33.89%, avg=31692.80, stdev=1229.51, samples=10 00:45:09.614 iops : min= 234, max= 260, avg=247.60, stdev= 9.61, samples=10 00:45:09.614 lat (msec) : 10=14.03%, 20=85.73%, 100=0.24% 00:45:09.614 cpu : usr=95.18%, sys=4.58%, ctx=10, majf=0, minf=58 00:45:09.614 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.614 issued rwts: total=1240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:09.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:09.614 00:45:09.614 Run status group 0 (all jobs): 00:45:09.614 READ: bw=91.3MiB/s (95.8MB/s), 30.1MiB/s-31.0MiB/s (31.5MB/s-32.5MB/s), io=461MiB (483MB), run=5006-5045msec 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 bdev_null0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 [2024-10-01 17:45:07.066073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 bdev_null1 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.614 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.615 bdev_null2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:09.615 { 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme$subsystem", 00:45:09.615 "trtype": "$TEST_TRANSPORT", 00:45:09.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "$NVMF_PORT", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.615 "hdgst": ${hdgst:-false}, 00:45:09.615 "ddgst": ${ddgst:-false} 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 } 00:45:09.615 EOF 00:45:09.615 )") 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:09.615 { 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme$subsystem", 00:45:09.615 "trtype": "$TEST_TRANSPORT", 00:45:09.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "$NVMF_PORT", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.615 "hdgst": ${hdgst:-false}, 00:45:09.615 "ddgst": ${ddgst:-false} 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 } 00:45:09.615 EOF 00:45:09.615 )") 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.615 17:45:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:09.615 { 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme$subsystem", 00:45:09.615 "trtype": "$TEST_TRANSPORT", 00:45:09.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "$NVMF_PORT", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.615 "hdgst": ${hdgst:-false}, 00:45:09.615 "ddgst": ${ddgst:-false} 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 } 00:45:09.615 EOF 00:45:09.615 )") 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme0", 00:45:09.615 "trtype": "tcp", 00:45:09.615 "traddr": "10.0.0.2", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "4420", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:09.615 "hdgst": false, 00:45:09.615 "ddgst": false 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 },{ 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme1", 00:45:09.615 "trtype": "tcp", 00:45:09.615 "traddr": "10.0.0.2", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "4420", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:09.615 "hdgst": false, 00:45:09.615 "ddgst": false 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 },{ 00:45:09.615 "params": { 00:45:09.615 "name": "Nvme2", 00:45:09.615 "trtype": "tcp", 00:45:09.615 "traddr": "10.0.0.2", 00:45:09.615 "adrfam": "ipv4", 00:45:09.615 "trsvcid": "4420", 00:45:09.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:09.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:09.615 "hdgst": false, 00:45:09.615 "ddgst": false 00:45:09.615 }, 00:45:09.615 "method": "bdev_nvme_attach_controller" 00:45:09.615 }' 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:09.615 
17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:09.615 17:45:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.615 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:09.615 ... 00:45:09.615 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:09.615 ... 00:45:09.615 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:09.616 ... 00:45:09.616 fio-3.35 00:45:09.616 Starting 24 threads 00:45:21.983 00:45:21.983 filename0: (groupid=0, jobs=1): err= 0: pid=3414049: Tue Oct 1 17:45:18 2024 00:45:21.983 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10028msec) 00:45:21.983 slat (usec): min=5, max=101, avg=11.96, stdev=11.22 00:45:21.983 clat (usec): min=11027, max=39841, avg=32451.76, stdev=1937.90 00:45:21.983 lat (usec): min=11037, max=39848, avg=32463.71, stdev=1936.56 00:45:21.983 clat percentiles (usec): 00:45:21.983 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:45:21.983 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.983 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.983 | 99.00th=[34341], 99.50th=[38011], 99.90th=[39060], 99.95th=[39584], 00:45:21.983 | 99.99th=[39584] 00:45:21.983 bw ( KiB/s): min= 1916, max= 2052, per=4.16%, avg=1964.15, stdev=61.54, samples=20 00:45:21.983 iops : min= 479, max= 513, avg=491.00, stdev=15.33, samples=20 00:45:21.983 lat (msec) : 20=0.97%, 50=99.03% 00:45:21.983 cpu : usr=98.95%, sys=0.74%, ctx=13, majf=0, minf=66 00:45:21.983 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:21.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.983 filename0: (groupid=0, jobs=1): err= 0: pid=3414050: Tue Oct 1 17:45:18 2024 00:45:21.983 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10010msec) 00:45:21.983 slat (nsec): min=5572, max=62290, avg=14075.74, stdev=10024.45 00:45:21.983 clat (usec): min=12740, max=46451, avg=32385.70, stdev=2150.30 00:45:21.983 lat (usec): min=12752, max=46462, avg=32399.78, stdev=2150.29 00:45:21.983 clat percentiles (usec): 00:45:21.983 | 1.00th=[17171], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:45:21.983 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.983 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.983 | 99.00th=[34341], 99.50th=[34341], 99.90th=[46400], 99.95th=[46400], 00:45:21.983 | 99.99th=[46400] 00:45:21.983 bw ( KiB/s): min= 1916, max= 2180, per=4.16%, avg=1966.63, stdev=76.63, samples=19 00:45:21.983 iops : min= 479, max= 545, avg=491.58, stdev=19.08, samples=19 00:45:21.983 lat (msec) : 20=1.42%, 50=98.58% 00:45:21.983 cpu : usr=98.98%, sys=0.73%, ctx=12, majf=0, minf=46 00:45:21.983 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:45:21.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.983 filename0: (groupid=0, jobs=1): err= 0: pid=3414051: Tue Oct 1 17:45:18 2024 00:45:21.983 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10016msec) 00:45:21.983 slat (nsec): min=5533, max=97315, avg=25631.12, stdev=16874.54 00:45:21.983 clat (usec): min=15771, max=51492, avg=32471.92, stdev=1702.37 00:45:21.983 lat (usec): min=15777, max=51500, avg=32497.55, stdev=1702.78 00:45:21.983 clat percentiles (usec): 00:45:21.983 | 1.00th=[28967], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.983 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:45:21.983 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.983 | 99.00th=[38536], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:45:21.983 | 99.99th=[51643] 00:45:21.983 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1946.32, stdev=68.80, samples=19 00:45:21.983 iops : min= 448, max= 512, avg=486.58, stdev=17.20, samples=19 00:45:21.983 lat (msec) : 20=0.51%, 50=99.45%, 100=0.04% 00:45:21.983 cpu : usr=98.89%, sys=0.80%, ctx=13, majf=0, minf=44 00:45:21.983 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:21.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.983 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.983 filename0: (groupid=0, jobs=1): err= 0: pid=3414052: Tue Oct 1 17:45:18 2024 00:45:21.983 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10025msec) 00:45:21.983 slat (usec): min=5, max=101, avg=21.48, stdev=18.38 00:45:21.983 clat (usec): min=11145, max=38124, avg=32374.47, stdev=1816.34 00:45:21.983 lat (usec): min=11153, max=38131, avg=32395.95, stdev=1815.55 00:45:21.983 clat percentiles (usec): 00:45:21.983 | 1.00th=[21627], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.983 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.983 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.983 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866], 00:45:21.983 | 99.99th=[38011] 00:45:21.983 bw ( KiB/s): min= 1916, max= 2052, per=4.16%, avg=1964.15, stdev=63.05, samples=20 00:45:21.983 iops : min= 479, max= 513, avg=491.00, stdev=15.71, samples=20 00:45:21.983 lat (msec) : 20=0.97%, 50=99.03% 00:45:21.984 cpu : usr=98.68%, sys=0.94%, ctx=72, majf=0, minf=50 00:45:21.984 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 filename0: (groupid=0, jobs=1): err= 0: pid=3414053: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10014msec) 00:45:21.984 slat (nsec): min=5589, max=99320, avg=21005.28, stdev=18073.97 00:45:21.984 clat (usec): min=21579, 
max=45847, avg=32551.68, stdev=1097.89 00:45:21.984 lat (usec): min=21589, max=45853, avg=32572.69, stdev=1096.58 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[29754], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.984 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.984 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.984 | 99.00th=[34866], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:45:21.984 | 99.99th=[45876] 00:45:21.984 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1952.68, stdev=55.51, samples=19 00:45:21.984 iops : min= 479, max= 512, avg=488.05, stdev=13.77, samples=19 00:45:21.984 lat (msec) : 50=100.00% 00:45:21.984 cpu : usr=99.06%, sys=0.62%, ctx=34, majf=0, minf=82 00:45:21.984 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 filename0: (groupid=0, jobs=1): err= 0: pid=3414054: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10005msec) 00:45:21.984 slat (nsec): min=5565, max=95568, avg=17942.83, stdev=15453.26 00:45:21.984 clat (usec): min=11761, max=62824, avg=32789.17, stdev=4306.96 00:45:21.984 lat (usec): min=11774, max=62847, avg=32807.11, stdev=4305.85 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[19792], 5.00th=[27395], 10.00th=[32113], 20.00th=[32375], 00:45:21.984 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.984 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[38536], 00:45:21.984 | 99.00th=[49021], 99.50th=[51643], 99.90th=[62653], 99.95th=[62653], 00:45:21.984 | 99.99th=[62653] 00:45:21.984 bw ( KiB/s): min= 1763, max= 2064, per=4.11%, avg=1942.00, stdev=80.74, samples=19 00:45:21.984 iops : min= 440, max= 516, avg=485.42, stdev=20.24, samples=19 00:45:21.984 lat (msec) : 20=1.13%, 50=98.25%, 100=0.62% 00:45:21.984 cpu : usr=99.00%, sys=0.68%, ctx=28, majf=0, minf=62 00:45:21.984 IO depths : 1=0.6%, 2=1.6%, 4=5.3%, 8=76.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=90.1%, 8=8.0%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 filename0: (groupid=0, jobs=1): err= 0: pid=3414055: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:45:21.984 slat (nsec): min=5572, max=63912, avg=9741.17, stdev=6663.41 00:45:21.984 clat (usec): min=14708, max=62023, avg=32615.24, stdev=2663.45 00:45:21.984 lat (usec): min=14714, max=62044, avg=32624.98, stdev=2663.56 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[20841], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:45:21.984 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.984 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:45:21.984 | 99.00th=[43779], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:45:21.984 | 99.99th=[62129] 00:45:21.984 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1946.26, stdev=67.26, 
samples=19 00:45:21.984 iops : min= 448, max= 512, avg=486.53, stdev=16.91, samples=19 00:45:21.984 lat (msec) : 20=0.49%, 50=99.18%, 100=0.33% 00:45:21.984 cpu : usr=97.80%, sys=1.21%, ctx=398, majf=0, minf=79 00:45:21.984 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 filename0: (groupid=0, jobs=1): err= 0: pid=3414056: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10011msec) 00:45:21.984 slat (nsec): min=5873, max=94881, avg=22492.19, stdev=13871.16 00:45:21.984 clat (usec): min=21628, max=34798, avg=32514.82, stdev=778.92 00:45:21.984 lat (usec): min=21638, max=34807, avg=32537.31, stdev=777.89 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.984 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.984 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.984 | 99.00th=[33817], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:45:21.984 | 99.99th=[34866] 00:45:21.984 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1952.53, stdev=57.42, samples=19 00:45:21.984 iops : min= 479, max= 512, avg=488.05, stdev=14.23, samples=19 00:45:21.984 lat (msec) : 50=100.00% 00:45:21.984 cpu : usr=98.65%, sys=1.03%, ctx=10, majf=0, minf=70 00:45:21.984 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 filename1: (groupid=0, jobs=1): err= 0: pid=3414057: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:45:21.984 slat (usec): min=5, max=102, avg=28.86, stdev=17.93 00:45:21.984 clat (usec): min=21673, max=39241, avg=32435.64, stdev=899.48 00:45:21.984 lat (usec): min=21683, max=39249, avg=32464.50, stdev=899.82 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:21.984 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:45:21.984 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.984 | 99.00th=[34341], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:45:21.984 | 99.99th=[39060] 00:45:21.984 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1952.53, stdev=57.42, samples=19 00:45:21.984 iops : min= 479, max= 512, avg=488.05, stdev=14.23, samples=19 00:45:21.984 lat (msec) : 50=100.00% 00:45:21.984 cpu : usr=98.69%, sys=0.87%, ctx=104, majf=0, minf=74 00:45:21.984 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.984 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.984 
filename1: (groupid=0, jobs=1): err= 0: pid=3414058: Tue Oct 1 17:45:18 2024 00:45:21.984 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10003msec) 00:45:21.984 slat (nsec): min=5563, max=64544, avg=12835.43, stdev=9096.99 00:45:21.984 clat (usec): min=13811, max=57898, avg=31994.05, stdev=3664.56 00:45:21.984 lat (usec): min=13836, max=57931, avg=32006.88, stdev=3664.95 00:45:21.984 clat percentiles (usec): 00:45:21.984 | 1.00th=[19006], 5.00th=[23725], 10.00th=[31065], 20.00th=[32113], 00:45:21.984 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.984 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:45:21.984 | 99.00th=[44303], 99.50th=[48497], 99.90th=[57934], 99.95th=[57934], 00:45:21.984 | 99.99th=[57934] 00:45:21.984 bw ( KiB/s): min= 1795, max= 2240, per=4.19%, avg=1981.58, stdev=110.47, samples=19 00:45:21.984 iops : min= 448, max= 560, avg=495.32, stdev=27.60, samples=19 00:45:21.984 lat (msec) : 20=1.40%, 50=98.28%, 100=0.32% 00:45:21.984 cpu : usr=98.81%, sys=0.80%, ctx=71, majf=0, minf=41 00:45:21.984 IO depths : 1=5.0%, 2=10.3%, 4=21.7%, 8=55.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:45:21.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.984 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414059: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10007msec) 00:45:21.985 slat (nsec): min=5554, max=96125, avg=15200.56, stdev=14226.36 00:45:21.985 clat (usec): min=13472, max=70414, avg=32257.21, stdev=5172.04 00:45:21.985 lat (usec): min=13478, max=70432, avg=32272.41, stdev=5171.53 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[18482], 5.00th=[24249], 10.00th=[26084], 20.00th=[29230], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.985 | 70.00th=[32900], 80.00th=[33162], 90.00th=[36439], 95.00th=[43254], 00:45:21.985 | 99.00th=[50070], 99.50th=[54264], 99.90th=[59507], 99.95th=[59507], 00:45:21.985 | 99.99th=[70779] 00:45:21.985 bw ( KiB/s): min= 1760, max= 2096, per=4.18%, avg=1973.05, stdev=70.74, samples=19 00:45:21.985 iops : min= 440, max= 524, avg=493.26, stdev=17.69, samples=19 00:45:21.985 lat (msec) : 20=1.62%, 50=97.23%, 100=1.15% 00:45:21.985 cpu : usr=98.96%, sys=0.70%, ctx=88, majf=0, minf=71 00:45:21.985 IO depths : 1=0.3%, 2=0.9%, 4=4.6%, 8=78.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=89.5%, 8=8.3%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414060: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=493, BW=1972KiB/s (2019kB/s)(19.3MiB/10028msec) 00:45:21.985 slat (nsec): min=5623, max=84362, avg=16175.46, stdev=11374.52 00:45:21.985 clat (usec): min=11122, max=36943, avg=32319.12, stdev=2043.57 00:45:21.985 lat (usec): min=11131, max=36955, avg=32335.29, stdev=2043.48 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[19268], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 
00:45:21.985 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.985 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:45:21.985 | 99.99th=[36963] 00:45:21.985 bw ( KiB/s): min= 1916, max= 2052, per=4.17%, avg=1970.55, stdev=64.81, samples=20 00:45:21.985 iops : min= 479, max= 513, avg=492.60, stdev=16.16, samples=20 00:45:21.985 lat (msec) : 20=1.29%, 50=98.71% 00:45:21.985 cpu : usr=98.83%, sys=0.86%, ctx=58, majf=0, minf=57 00:45:21.985 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414061: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10018msec) 00:45:21.985 slat (nsec): min=5577, max=64312, avg=14134.98, stdev=9193.41 00:45:21.985 clat (usec): min=17680, max=48789, avg=32510.62, stdev=1808.65 00:45:21.985 lat (usec): min=17686, max=48798, avg=32524.75, stdev=1809.10 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[20841], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.985 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.985 | 99.00th=[35390], 99.50th=[40109], 99.90th=[46924], 99.95th=[48497], 00:45:21.985 | 99.99th=[49021] 00:45:21.985 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1959.05, stdev=59.30, samples=19 00:45:21.985 iops : min= 479, max= 512, avg=489.68, stdev=14.71, samples=19 00:45:21.985 lat (msec) : 20=0.69%, 50=99.31% 00:45:21.985 cpu : usr=98.91%, sys=0.72%, ctx=65, majf=0, minf=115 00:45:21.985 IO depths : 1=4.9%, 2=11.1%, 4=24.9%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414062: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=526, BW=2106KiB/s (2157kB/s)(20.6MiB/10027msec) 00:45:21.985 slat (nsec): min=2794, max=63439, avg=7166.24, stdev=3912.89 00:45:21.985 clat (usec): min=1455, max=51724, avg=30277.53, stdev=6417.13 00:45:21.985 lat (usec): min=1460, max=51731, avg=30284.69, stdev=6417.58 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[ 1745], 5.00th=[17695], 10.00th=[22152], 20.00th=[31851], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.985 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33162], 00:45:21.985 | 99.00th=[34866], 99.50th=[45876], 99.90th=[49021], 99.95th=[49546], 00:45:21.985 | 99.99th=[51643] 00:45:21.985 bw ( KiB/s): min= 1916, max= 3424, per=4.47%, avg=2110.75, stdev=381.75, samples=20 00:45:21.985 iops : min= 479, max= 856, avg=527.65, stdev=95.45, samples=20 00:45:21.985 lat (msec) : 2=1.65%, 4=0.78%, 10=0.30%, 20=5.30%, 50=91.93% 00:45:21.985 lat (msec) : 100=0.04% 00:45:21.985 cpu : usr=98.87%, sys=0.84%, ctx=62, majf=0, minf=97 00:45:21.985 IO depths : 1=4.6%, 2=9.9%, 4=22.0%, 8=55.5%, 
16=7.9%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414063: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10012msec) 00:45:21.985 slat (nsec): min=5501, max=74434, avg=17140.67, stdev=12505.56 00:45:21.985 clat (usec): min=10561, max=63290, avg=32390.30, stdev=3150.43 00:45:21.985 lat (usec): min=10567, max=63309, avg=32407.44, stdev=3151.33 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[20055], 5.00th=[28967], 10.00th=[31851], 20.00th=[32113], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.985 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:45:21.985 | 99.00th=[40109], 99.50th=[47449], 99.90th=[63177], 99.95th=[63177], 00:45:21.985 | 99.99th=[63177] 00:45:21.985 bw ( KiB/s): min= 1792, max= 2171, per=4.15%, avg=1963.74, stdev=81.04, samples=19 00:45:21.985 iops : min= 448, max= 542, avg=490.89, stdev=20.15, samples=19 00:45:21.985 lat (msec) : 20=1.02%, 50=98.62%, 100=0.37% 00:45:21.985 cpu : usr=99.02%, sys=0.68%, ctx=16, majf=0, minf=50 00:45:21.985 IO depths : 1=3.1%, 2=8.2%, 4=20.4%, 8=58.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=93.2%, 8=1.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename1: (groupid=0, jobs=1): err= 0: pid=3414064: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10015msec) 00:45:21.985 slat (nsec): min=5466, max=83445, avg=21858.05, stdev=12501.16 00:45:21.985 clat (usec): min=15449, max=55865, avg=32351.17, stdev=2491.35 00:45:21.985 lat (usec): min=15455, max=55872, avg=32373.03, stdev=2492.86 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[20579], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.985 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.985 | 99.00th=[39584], 99.50th=[45351], 99.90th=[51643], 99.95th=[52167], 00:45:21.985 | 99.99th=[55837] 00:45:21.985 bw ( KiB/s): min= 1792, max= 2112, per=4.14%, avg=1958.11, stdev=77.00, samples=19 00:45:21.985 iops : min= 448, max= 528, avg=489.53, stdev=19.25, samples=19 00:45:21.985 lat (msec) : 20=0.65%, 50=99.07%, 100=0.28% 00:45:21.985 cpu : usr=98.95%, sys=0.71%, ctx=53, majf=0, minf=53 00:45:21.985 IO depths : 1=5.1%, 2=11.2%, 4=24.3%, 8=52.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:45:21.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.985 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.985 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.985 filename2: (groupid=0, jobs=1): err= 0: pid=3414065: Tue Oct 1 17:45:18 2024 00:45:21.985 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10023msec) 00:45:21.985 slat (nsec): min=5572, max=83099, avg=10140.25, stdev=6584.65 
00:45:21.985 clat (usec): min=4963, max=39301, avg=32039.70, stdev=3125.78 00:45:21.985 lat (usec): min=4981, max=39309, avg=32049.84, stdev=3125.04 00:45:21.985 clat percentiles (usec): 00:45:21.985 | 1.00th=[13829], 5.00th=[31065], 10.00th=[32113], 20.00th=[32375], 00:45:21.985 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.985 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.985 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:45:21.985 | 99.99th=[39060] 00:45:21.985 bw ( KiB/s): min= 1916, max= 2304, per=4.21%, avg=1989.75, stdev=97.33, samples=20 00:45:21.986 iops : min= 479, max= 576, avg=497.40, stdev=24.31, samples=20 00:45:21.986 lat (msec) : 10=0.32%, 20=1.92%, 50=97.76% 00:45:21.986 cpu : usr=98.19%, sys=1.05%, ctx=187, majf=0, minf=87 00:45:21.986 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414066: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10011msec) 00:45:21.986 slat (nsec): min=5497, max=61546, avg=17776.08, stdev=10805.35 00:45:21.986 clat (usec): min=15933, max=54695, avg=32546.57, stdev=1735.79 00:45:21.986 lat (usec): min=15941, max=54704, avg=32564.34, stdev=1735.52 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.986 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.986 | 99.00th=[34341], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:45:21.986 | 99.99th=[54789] 00:45:21.986 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1952.84, stdev=72.61, samples=19 00:45:21.986 iops : min= 448, max= 512, avg=488.21, stdev=18.15, samples=19 00:45:21.986 lat (msec) : 20=0.49%, 50=99.47%, 100=0.04% 00:45:21.986 cpu : usr=99.02%, sys=0.68%, ctx=14, majf=0, minf=63 00:45:21.986 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414067: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=492, BW=1972KiB/s (2019kB/s)(19.3MiB/10029msec) 00:45:21.986 slat (nsec): min=5586, max=64751, avg=16603.88, stdev=10952.22 00:45:21.986 clat (usec): min=13262, max=48215, avg=32311.21, stdev=2192.81 00:45:21.986 lat (usec): min=13287, max=48223, avg=32327.81, stdev=2192.68 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[17171], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.986 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.986 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[46400], 00:45:21.986 | 99.99th=[47973] 00:45:21.986 bw ( KiB/s): min= 
1916, max= 2176, per=4.17%, avg=1970.30, stdev=76.35, samples=20 00:45:21.986 iops : min= 479, max= 544, avg=492.50, stdev=19.01, samples=20 00:45:21.986 lat (msec) : 20=1.38%, 50=98.62% 00:45:21.986 cpu : usr=98.90%, sys=0.79%, ctx=27, majf=0, minf=91 00:45:21.986 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414068: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10005msec) 00:45:21.986 slat (nsec): min=5561, max=69796, avg=16368.92, stdev=11534.65 00:45:21.986 clat (usec): min=16707, max=57345, avg=31956.05, stdev=3396.38 00:45:21.986 lat (usec): min=16724, max=57371, avg=31972.42, stdev=3397.62 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[19006], 5.00th=[24249], 10.00th=[30802], 20.00th=[32113], 00:45:21.986 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:45:21.986 | 99.00th=[41681], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:45:21.986 | 99.99th=[57410] 00:45:21.986 bw ( KiB/s): min= 1795, max= 2288, per=4.21%, avg=1990.84, stdev=127.94, samples=19 00:45:21.986 iops : min= 448, max= 572, avg=497.63, stdev=32.03, samples=19 00:45:21.986 lat (msec) : 20=2.37%, 50=97.31%, 100=0.32% 00:45:21.986 cpu : usr=98.91%, sys=0.79%, ctx=14, majf=0, minf=45 00:45:21.986 IO depths : 1=5.2%, 2=10.4%, 4=21.7%, 8=55.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414069: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:45:21.986 slat (nsec): min=5602, max=95173, avg=27324.47, stdev=17121.30 00:45:21.986 clat (usec): min=21694, max=43888, avg=32467.52, stdev=926.62 00:45:21.986 lat (usec): min=21703, max=43909, avg=32494.84, stdev=926.63 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.986 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.986 | 99.00th=[34341], 99.50th=[34341], 99.90th=[40109], 99.95th=[40109], 00:45:21.986 | 99.99th=[43779] 00:45:21.986 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1952.53, stdev=57.42, samples=19 00:45:21.986 iops : min= 479, max= 512, avg=488.05, stdev=14.23, samples=19 00:45:21.986 lat (msec) : 50=100.00% 00:45:21.986 cpu : usr=98.88%, sys=0.77%, ctx=63, majf=0, minf=72 00:45:21.986 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 
latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414070: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10006msec) 00:45:21.986 slat (nsec): min=5617, max=99125, avg=27735.12, stdev=16386.09 00:45:21.986 clat (usec): min=6554, max=62090, avg=32470.95, stdev=2294.54 00:45:21.986 lat (usec): min=6560, max=62121, avg=32498.69, stdev=2295.27 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.986 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.986 | 99.00th=[34341], 99.50th=[34866], 99.90th=[62129], 99.95th=[62129], 00:45:21.986 | 99.99th=[62129] 00:45:21.986 bw ( KiB/s): min= 1788, max= 2048, per=4.12%, avg=1946.05, stdev=69.33, samples=19 00:45:21.986 iops : min= 447, max= 512, avg=486.47, stdev=17.35, samples=19 00:45:21.986 lat (msec) : 10=0.20%, 20=0.33%, 50=99.14%, 100=0.33% 00:45:21.986 cpu : usr=98.99%, sys=0.70%, ctx=18, majf=0, minf=67 00:45:21.986 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414071: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.1MiB/10005msec) 00:45:21.986 slat (nsec): min=5576, max=69356, avg=17130.07, stdev=11927.00 00:45:21.986 clat (usec): min=5986, max=57492, avg=32510.30, stdev=2801.18 00:45:21.986 lat (usec): min=5992, max=57515, avg=32527.43, stdev=2801.45 00:45:21.986 clat percentiles (usec): 00:45:21.986 | 1.00th=[20841], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:21.986 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:21.986 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.986 | 99.00th=[43779], 99.50th=[46924], 99.90th=[57410], 99.95th=[57410], 00:45:21.986 | 99.99th=[57410] 00:45:21.986 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1948.79, stdev=76.25, samples=19 00:45:21.986 iops : min= 448, max= 512, avg=487.16, stdev=19.15, samples=19 00:45:21.986 lat (msec) : 10=0.29%, 20=0.61%, 50=98.61%, 100=0.49% 00:45:21.986 cpu : usr=98.46%, sys=0.90%, ctx=149, majf=0, minf=54 00:45:21.986 IO depths : 1=5.7%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:21.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.986 issued rwts: total=4900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.986 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.986 filename2: (groupid=0, jobs=1): err= 0: pid=3414072: Tue Oct 1 17:45:18 2024 00:45:21.986 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10019msec) 00:45:21.986 slat (nsec): min=5572, max=89418, avg=15039.13, stdev=13580.16 00:45:21.986 clat (usec): min=17613, max=41384, avg=32514.28, stdev=1244.08 00:45:21.986 lat (usec): min=17623, max=41439, avg=32529.31, stdev=1243.17 00:45:21.987 clat percentiles (usec): 00:45:21.987 | 1.00th=[25297], 5.00th=[32113], 10.00th=[32113], 
20.00th=[32375], 00:45:21.987 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:45:21.987 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:45:21.987 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:45:21.987 | 99.99th=[41157] 00:45:21.987 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1956.65, stdev=59.72, samples=20 00:45:21.987 iops : min= 479, max= 512, avg=489.05, stdev=14.76, samples=20 00:45:21.987 lat (msec) : 20=0.33%, 50=99.67% 00:45:21.987 cpu : usr=98.81%, sys=0.83%, ctx=48, majf=0, minf=70 00:45:21.987 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:21.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.987 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:21.987 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:21.987 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:21.987 00:45:21.987 Run status group 0 (all jobs): 00:45:21.987 READ: bw=46.1MiB/s (48.4MB/s), 1947KiB/s-2106KiB/s (1994kB/s-2157kB/s), io=463MiB (485MB), run=10003-10029msec 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 bdev_null0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 [2024-10-01 17:45:18.707050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 bdev_null1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:21.987 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:21.988 { 00:45:21.988 "params": { 00:45:21.988 "name": "Nvme$subsystem", 00:45:21.988 "trtype": "$TEST_TRANSPORT", 00:45:21.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:21.988 "adrfam": "ipv4", 00:45:21.988 "trsvcid": "$NVMF_PORT", 00:45:21.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:21.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:21.988 "hdgst": ${hdgst:-false}, 00:45:21.988 "ddgst": ${ddgst:-false} 00:45:21.988 }, 00:45:21.988 "method": "bdev_nvme_attach_controller" 00:45:21.988 } 00:45:21.988 EOF 00:45:21.988 )") 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:21.988 { 00:45:21.988 "params": { 00:45:21.988 "name": "Nvme$subsystem", 00:45:21.988 "trtype": "$TEST_TRANSPORT", 00:45:21.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:21.988 "adrfam": "ipv4", 00:45:21.988 "trsvcid": "$NVMF_PORT", 00:45:21.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:21.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:21.988 "hdgst": ${hdgst:-false}, 00:45:21.988 "ddgst": ${ddgst:-false} 00:45:21.988 }, 00:45:21.988 "method": "bdev_nvme_attach_controller" 00:45:21.988 } 00:45:21.988 EOF 00:45:21.988 )") 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:21.988 "params": { 00:45:21.988 "name": "Nvme0", 00:45:21.988 "trtype": "tcp", 00:45:21.988 "traddr": "10.0.0.2", 00:45:21.988 "adrfam": "ipv4", 00:45:21.988 "trsvcid": "4420", 00:45:21.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:21.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:21.988 "hdgst": false, 00:45:21.988 "ddgst": false 00:45:21.988 }, 00:45:21.988 "method": "bdev_nvme_attach_controller" 00:45:21.988 },{ 00:45:21.988 "params": { 00:45:21.988 "name": "Nvme1", 00:45:21.988 "trtype": "tcp", 00:45:21.988 "traddr": "10.0.0.2", 00:45:21.988 "adrfam": "ipv4", 00:45:21.988 "trsvcid": "4420", 00:45:21.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:21.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:21.988 "hdgst": false, 00:45:21.988 "ddgst": false 00:45:21.988 }, 00:45:21.988 "method": "bdev_nvme_attach_controller" 00:45:21.988 }' 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:21.988 17:45:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:21.988 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:21.988 ... 00:45:21.988 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:21.988 ... 
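The xtrace above is dense, so here is the same launch written out by hand: the SPDK fio bdev plugin is LD_PRELOADed into stock fio, the attach-controller JSON printed above is fed in via --spdk_json_conf, and the job file carries the dif.sh parameters (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, one file per subsystem). This is a sketch, not the harness's exact gen_fio_conf output: the temp-file paths, the namespace bdev names Nvme0n1/Nvme1n1, and the thread/time_based lines are assumptions based on common SPDK fio-plugin usage, while the transport parameters and NQNs are taken from the log.

# Sketch: hand-rolled equivalent of the fio launch traced above (paths illustrative).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf=/tmp/bdev.json /tmp/dif_rand.fio

With two filename sections and numjobs=2 this yields the "Starting 4 threads" line and the per-job block sizes (8192B read / 16KiB write / 128KiB trim) reported by fio above.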
00:45:21.988 fio-3.35 00:45:21.988 Starting 4 threads 00:45:27.276 00:45:27.276 filename0: (groupid=0, jobs=1): err= 0: pid=3416888: Tue Oct 1 17:45:25 2024 00:45:27.276 read: IOPS=2041, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:45:27.276 slat (nsec): min=5415, max=77847, avg=6384.27, stdev=2593.29 00:45:27.276 clat (usec): min=2217, max=6346, avg=3901.10, stdev=638.94 00:45:27.276 lat (usec): min=2238, max=6352, avg=3907.48, stdev=638.79 00:45:27.276 clat percentiles (usec): 00:45:27.276 | 1.00th=[ 3064], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:45:27.276 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:45:27.276 | 70.00th=[ 3818], 80.00th=[ 3818], 90.00th=[ 5407], 95.00th=[ 5473], 00:45:27.276 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6194], 00:45:27.276 | 99.99th=[ 6325] 00:45:27.276 bw ( KiB/s): min=15712, max=16944, per=24.36%, avg=16298.67, stdev=371.63, samples=9 00:45:27.276 iops : min= 1964, max= 2118, avg=2037.33, stdev=46.45, samples=9 00:45:27.276 lat (msec) : 4=84.46%, 10=15.54% 00:45:27.276 cpu : usr=96.78%, sys=3.00%, ctx=6, majf=0, minf=0 00:45:27.276 IO depths : 1=0.1%, 2=0.1%, 4=71.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:27.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 issued rwts: total=10213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:27.276 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:27.276 filename0: (groupid=0, jobs=1): err= 0: pid=3416889: Tue Oct 1 17:45:25 2024 00:45:27.276 read: IOPS=2124, BW=16.6MiB/s (17.4MB/s)(83.0MiB/5001msec) 00:45:27.276 slat (nsec): min=5395, max=88954, avg=6955.30, stdev=2698.48 00:45:27.276 clat (usec): min=1190, max=6279, avg=3749.99, stdev=433.46 00:45:27.276 lat (usec): min=1196, max=6285, avg=3756.94, stdev=433.42 00:45:27.276 clat percentiles (usec): 00:45:27.276 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3490], 20.00th=[ 3556], 00:45:27.276 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3785], 00:45:27.276 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4178], 00:45:27.276 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 5932], 99.95th=[ 6063], 00:45:27.276 | 99.99th=[ 6259] 00:45:27.276 bw ( KiB/s): min=15888, max=17600, per=25.31%, avg=16931.67, stdev=599.57, samples=9 00:45:27.276 iops : min= 1986, max= 2200, avg=2116.44, stdev=74.95, samples=9 00:45:27.276 lat (msec) : 2=0.23%, 4=92.39%, 10=7.38% 00:45:27.276 cpu : usr=95.50%, sys=3.70%, ctx=241, majf=0, minf=0 00:45:27.276 IO depths : 1=0.1%, 2=0.3%, 4=65.5%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:27.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 issued rwts: total=10624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:27.276 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:27.276 filename1: (groupid=0, jobs=1): err= 0: pid=3416890: Tue Oct 1 17:45:25 2024 00:45:27.276 read: IOPS=2119, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5004msec) 00:45:27.276 slat (nsec): min=5397, max=71872, avg=8108.77, stdev=3160.79 00:45:27.276 clat (usec): min=1984, max=6102, avg=3755.15, stdev=379.85 00:45:27.276 lat (usec): min=2005, max=6113, avg=3763.26, stdev=379.52 00:45:27.276 clat percentiles (usec): 00:45:27.276 | 1.00th=[ 3163], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3556], 00:45:27.276 | 30.00th=[ 3589], 40.00th=[ 
3621], 50.00th=[ 3752], 60.00th=[ 3785], 00:45:27.276 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4146], 00:45:27.276 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 5800], 99.95th=[ 5997], 00:45:27.276 | 99.99th=[ 6063] 00:45:27.276 bw ( KiB/s): min=16224, max=17376, per=25.42%, avg=17009.78, stdev=355.39, samples=9 00:45:27.276 iops : min= 2028, max= 2172, avg=2126.22, stdev=44.42, samples=9 00:45:27.276 lat (msec) : 2=0.01%, 4=92.45%, 10=7.54% 00:45:27.276 cpu : usr=96.72%, sys=2.86%, ctx=108, majf=0, minf=0 00:45:27.276 IO depths : 1=0.1%, 2=0.1%, 4=66.7%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:27.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 issued rwts: total=10605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:27.276 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:27.276 filename1: (groupid=0, jobs=1): err= 0: pid=3416891: Tue Oct 1 17:45:25 2024 00:45:27.276 read: IOPS=2080, BW=16.3MiB/s (17.0MB/s)(81.3MiB/5002msec) 00:45:27.276 slat (nsec): min=5400, max=58528, avg=7858.93, stdev=2954.69 00:45:27.276 clat (usec): min=1756, max=8438, avg=3823.59, stdev=519.50 00:45:27.276 lat (usec): min=1761, max=8471, avg=3831.45, stdev=519.19 00:45:27.276 clat percentiles (usec): 00:45:27.276 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3556], 00:45:27.276 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:45:27.276 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4113], 95.00th=[ 5276], 00:45:27.276 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6456], 99.95th=[ 8356], 00:45:27.276 | 99.99th=[ 8455] 00:45:27.276 bw ( KiB/s): min=15952, max=17200, per=24.93%, avg=16680.89, stdev=453.69, samples=9 00:45:27.276 iops : min= 1994, max= 2150, avg=2085.11, stdev=56.71, samples=9 00:45:27.276 lat (msec) : 2=0.12%, 4=88.60%, 10=11.28% 00:45:27.276 cpu : usr=97.48%, sys=2.28%, ctx=4, majf=0, minf=0 00:45:27.276 IO depths : 1=0.1%, 2=0.1%, 4=72.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:27.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:27.276 issued rwts: total=10407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:27.276 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:27.276 00:45:27.276 Run status group 0 (all jobs): 00:45:27.276 READ: bw=65.3MiB/s (68.5MB/s), 16.0MiB/s-16.6MiB/s (16.7MB/s-17.4MB/s), io=327MiB (343MB), run=5001-5004msec 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.276 17:45:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 00:45:27.277 real 0m24.403s 00:45:27.277 user 5m20.368s 00:45:27.277 sys 0m4.395s 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 ************************************ 00:45:27.277 END TEST fio_dif_rand_params 00:45:27.277 ************************************ 00:45:27.277 17:45:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:27.277 17:45:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:27.277 17:45:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 ************************************ 00:45:27.277 START TEST fio_dif_digest 00:45:27.277 ************************************ 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 bdev_null0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:27.277 [2024-10-01 17:45:25.333579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 
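For orientation, the rpc_cmd calls traced above map to a plain scripts/rpc.py sequence on the target side. The arguments below are copied from the trace; the rpc.py path is illustrative, and the NVMe/TCP transport is assumed to have been created earlier in the run (that step is outside this excerpt).

# Target-side setup equivalent to the rpc_cmd trace above (arguments copied verbatim).
# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, end-to-end DIF type 3.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the log is the target acknowledging the final add_listener call.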
00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:27.277 { 00:45:27.277 "params": { 00:45:27.277 "name": "Nvme$subsystem", 00:45:27.277 "trtype": "$TEST_TRANSPORT", 00:45:27.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:27.277 "adrfam": "ipv4", 00:45:27.277 "trsvcid": "$NVMF_PORT", 00:45:27.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:27.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:27.277 "hdgst": ${hdgst:-false}, 00:45:27.277 "ddgst": ${ddgst:-false} 00:45:27.277 }, 00:45:27.277 "method": "bdev_nvme_attach_controller" 00:45:27.277 } 00:45:27.277 EOF 00:45:27.277 )") 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
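The gen_nvmf_target_json fragments above are hard to read under xtrace. The pattern is: build one bdev_nvme_attach_controller fragment per subsystem with a heredoc, collect the fragments in the config array, join them with IFS=',', and run the result through jq. The following is a stripped-down sketch of that pattern, not the verbatim nvmf/common.sh code; the outer subsystems/bdev/config wrapper is an assumption inferred from how --spdk_json_conf consumes the output.

# Simplified sketch of the JSON-assembly pattern visible in the trace.
config=()
for sub in 0; do
  config+=("$(
    cat <<EOF
{ "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
              "hostnqn": "nqn.2016-06.io.spdk:host$sub",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
  )")
done
# Join the fragments with commas, wrap them in the bdev-subsystem skeleton,
# and let jq validate and pretty-print the result.
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
EOF

Run with hdgst=true and ddgst=true set, this produces essentially the attach JSON printed a few lines below (modulo whitespace and key order).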
00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:27.277 "params": { 00:45:27.277 "name": "Nvme0", 00:45:27.277 "trtype": "tcp", 00:45:27.277 "traddr": "10.0.0.2", 00:45:27.277 "adrfam": "ipv4", 00:45:27.277 "trsvcid": "4420", 00:45:27.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:27.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:27.277 "hdgst": true, 00:45:27.277 "ddgst": true 00:45:27.277 }, 00:45:27.277 "method": "bdev_nvme_attach_controller" 00:45:27.277 }' 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:27.277 17:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:27.277 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:27.277 ... 
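Compared with the random-parameters pass, the digest pass changes only the job shape and the two digest flags in the attach JSON printed above (hdgst/ddgst true enable NVMe/TCP header and data digests on the initiator connection). A minimal job-file sketch under the same assumptions as before: the bdev name Nvme0n1, temp paths, and thread/time_based lines are illustrative, while block size, queue depth, job count, and runtime come from the dif.sh settings earlier in the log.

# Sketch: digest run -- one NVMe/TCP subsystem, digests enabled in the attach JSON.
# /tmp/bdev_digest.json stands for the JSON printed above with "hdgst": true, "ddgst": true.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf=/tmp/bdev_digest.json /tmp/dif_digest.fio

This matches the job description fio prints below: randread at 128KiB blocks, ioengine=spdk_bdev, iodepth=3, three threads.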
00:45:27.277 fio-3.35 00:45:27.277 Starting 3 threads 00:45:39.510 00:45:39.510 filename0: (groupid=0, jobs=1): err= 0: pid=3418234: Tue Oct 1 17:45:36 2024 00:45:39.510 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(288MiB/10050msec) 00:45:39.510 slat (nsec): min=5725, max=36905, avg=6876.45, stdev=1397.86 00:45:39.510 clat (usec): min=8123, max=51035, avg=13082.10, stdev=1427.16 00:45:39.510 lat (usec): min=8131, max=51041, avg=13088.98, stdev=1427.12 00:45:39.510 clat percentiles (usec): 00:45:39.510 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:45:39.510 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:45:39.510 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:45:39.510 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16450], 99.95th=[49546], 00:45:39.510 | 99.99th=[51119] 00:45:39.510 bw ( KiB/s): min=28672, max=31232, per=34.83%, avg=29414.40, stdev=669.11, samples=20 00:45:39.510 iops : min= 224, max= 244, avg=229.80, stdev= 5.23, samples=20 00:45:39.510 lat (msec) : 10=0.48%, 20=99.43%, 50=0.04%, 100=0.04% 00:45:39.510 cpu : usr=93.24%, sys=5.80%, ctx=609, majf=0, minf=123 00:45:39.510 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:39.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:39.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:39.510 filename0: (groupid=0, jobs=1): err= 0: pid=3418235: Tue Oct 1 17:45:36 2024 00:45:39.510 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(290MiB/10048msec) 00:45:39.510 slat (nsec): min=5796, max=32290, avg=6583.30, stdev=1019.90 00:45:39.510 clat (usec): min=8611, max=54364, avg=12975.98, stdev=1455.48 00:45:39.510 lat (usec): min=8618, max=54370, avg=12982.56, stdev=1455.49 00:45:39.510 clat percentiles (usec): 00:45:39.510 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:45:39.510 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:45:39.510 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:45:39.510 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15795], 99.95th=[50070], 00:45:39.510 | 99.99th=[54264] 00:45:39.510 bw ( KiB/s): min=28672, max=30976, per=35.11%, avg=29644.80, stdev=535.70, samples=20 00:45:39.510 iops : min= 224, max= 242, avg=231.60, stdev= 4.19, samples=20 00:45:39.510 lat (msec) : 10=0.43%, 20=99.48%, 50=0.04%, 100=0.04% 00:45:39.510 cpu : usr=96.11%, sys=3.67%, ctx=24, majf=0, minf=175 00:45:39.510 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:39.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:39.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:39.510 filename0: (groupid=0, jobs=1): err= 0: pid=3418236: Tue Oct 1 17:45:36 2024 00:45:39.510 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(252MiB/10049msec) 00:45:39.510 slat (nsec): min=5749, max=37088, avg=6554.88, stdev=1204.11 00:45:39.510 clat (usec): min=11872, max=58039, avg=14953.25, stdev=2244.80 00:45:39.510 lat (usec): min=11879, max=58046, avg=14959.81, stdev=2244.81 00:45:39.510 clat percentiles (usec): 00:45:39.510 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13435], 
20.00th=[13960], 00:45:39.510 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:45:39.510 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16712], 00:45:39.510 | 99.00th=[17695], 99.50th=[18220], 99.90th=[54264], 99.95th=[56361], 00:45:39.510 | 99.99th=[57934] 00:45:39.510 bw ( KiB/s): min=23040, max=26624, per=30.47%, avg=25728.00, stdev=763.50, samples=20 00:45:39.510 iops : min= 180, max= 208, avg=201.00, stdev= 5.96, samples=20 00:45:39.510 lat (msec) : 20=99.75%, 50=0.05%, 100=0.20% 00:45:39.510 cpu : usr=95.48%, sys=4.29%, ctx=22, majf=0, minf=142 00:45:39.510 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:39.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:39.510 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:39.510 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:39.510 00:45:39.510 Run status group 0 (all jobs): 00:45:39.510 READ: bw=82.5MiB/s (86.5MB/s), 25.0MiB/s-28.8MiB/s (26.2MB/s-30.2MB/s), io=829MiB (869MB), run=10048-10050msec 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:39.510 00:45:39.510 real 0m11.037s 00:45:39.510 user 0m45.484s 00:45:39.510 sys 0m1.712s 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:39.510 17:45:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:39.510 ************************************ 00:45:39.510 END TEST fio_dif_digest 00:45:39.510 ************************************ 00:45:39.510 17:45:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:39.510 17:45:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:39.510 rmmod nvme_tcp 00:45:39.510 rmmod nvme_fabrics 00:45:39.510 rmmod nvme_keyring 00:45:39.510 17:45:36 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3407537 ']' 00:45:39.510 17:45:36 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3407537 00:45:39.510 17:45:36 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3407537 ']' 00:45:39.510 17:45:36 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3407537 00:45:39.510 17:45:36 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:45:39.510 17:45:36 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407537 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407537' 00:45:39.511 killing process with pid 3407537 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3407537 00:45:39.511 17:45:36 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3407537 00:45:39.511 17:45:36 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:45:39.511 17:45:36 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:41.426 Waiting for block devices as requested 00:45:41.426 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:41.426 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:41.426 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:41.426 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:41.426 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:41.687 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:41.687 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:41.687 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:41.948 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:41.948 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:41.948 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:42.209 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:42.209 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:42.209 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:42.469 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:42.469 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:42.469 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:42.730 17:45:41 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:42.731 17:45:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:42.731 17:45:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:45.278 17:45:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:45.278 
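[editor's note] The records above are the DIF test's teardown: the kernel NVMe/TCP host modules are removed, the nvmf_tgt process recorded at start-up is killed, setup.sh reset rebinds the PCI devices, and the test's iptables rule, namespace and initiator address are cleaned up. A rough equivalent, with the PID and SPDK path as stated assumptions:

```bash
# Rough sketch of the teardown sequence traced above. NVMF_PID and SPDK_DIR are
# assumptions standing in for the values the harness records at start-up.
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring once unused
modprobe -v -r nvme-fabrics
kill "$NVMF_PID" 2>/dev/null || true
while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done   # wait for nvmf_tgt to exit
"$SPDK_DIR/scripts/setup.sh" reset                          # rebind devices to kernel drivers
iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop the test's ACCEPT rule
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true            # remove the target namespace
ip -4 addr flush cvl_0_1                                    # clear the initiator-side address
```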
00:45:45.278 real 1m16.545s 00:45:45.278 user 8m5.187s 00:45:45.278 sys 0m20.994s 00:45:45.278 17:45:43 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:45.278 17:45:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:45.278 ************************************ 00:45:45.278 END TEST nvmf_dif 00:45:45.278 ************************************ 00:45:45.278 17:45:43 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:45.278 17:45:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:45.278 17:45:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:45.278 17:45:43 -- common/autotest_common.sh@10 -- # set +x 00:45:45.278 ************************************ 00:45:45.278 START TEST nvmf_abort_qd_sizes 00:45:45.278 ************************************ 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:45.278 * Looking for test storage... 00:45:45.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:45.278 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.279 --rc geninfo_all_blocks=1 00:45:45.279 --rc geninfo_unexecuted_blocks=1 00:45:45.279 00:45:45.279 ' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.279 --rc geninfo_all_blocks=1 00:45:45.279 --rc geninfo_unexecuted_blocks=1 00:45:45.279 00:45:45.279 ' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.279 --rc geninfo_all_blocks=1 00:45:45.279 --rc geninfo_unexecuted_blocks=1 00:45:45.279 00:45:45.279 ' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:45.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.279 --rc genhtml_branch_coverage=1 00:45:45.279 --rc genhtml_function_coverage=1 00:45:45.279 --rc genhtml_legend=1 00:45:45.279 --rc geninfo_all_blocks=1 00:45:45.279 --rc geninfo_unexecuted_blocks=1 00:45:45.279 00:45:45.279 ' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:45.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:45.279 17:45:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:51.869 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:51.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:51.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:51.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:51.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:51.870 17:45:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:51.870 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:52.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:52.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:45:52.130 00:45:52.130 --- 10.0.0.2 ping statistics --- 00:45:52.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.130 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:52.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:52.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:45:52.130 00:45:52.130 --- 10.0.0.1 ping statistics --- 00:45:52.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.130 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:45:52.130 17:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:55.431 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:55.431 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:55.431 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:55.431 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:55.431 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:55.691 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:55.951 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3427342 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3427342 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3427342 ']' 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:56.211 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:56.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:56.212 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:56.212 17:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:56.212 [2024-10-01 17:45:54.596740] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:45:56.212 [2024-10-01 17:45:54.596790] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:56.212 [2024-10-01 17:45:54.662867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:56.212 [2024-10-01 17:45:54.695825] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:56.212 [2024-10-01 17:45:54.695862] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:56.212 [2024-10-01 17:45:54.695870] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:56.212 [2024-10-01 17:45:54.695877] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:56.212 [2024-10-01 17:45:54.695883] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:56.212 [2024-10-01 17:45:54.696038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.212 [2024-10-01 17:45:54.696098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:45:56.212 [2024-10-01 17:45:54.696264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.212 [2024-10-01 17:45:54.696265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:45:57.150 
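[editor's note] The records above move one port of the NIC into its own network namespace, give the two ports the 10.0.0.x pair, verify reachability in both directions, load the host-side transport module and then launch nvmf_tgt inside that namespace. A minimal sketch of the same bring-up follows; the interface names, addresses and nvmf_tgt flags come from the trace, while SPDK_DIR is an assumption.

```bash
# Minimal sketch of the namespace-based target bring-up traced above.
TGT_NS=cvl_0_0_ns_spdk
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"                 # target port lives in its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1          # target ns -> root ns
modprobe nvme-tcp                                   # host-side transport for later connects
ip netns exec "$TGT_NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
```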
17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:57.150 17:45:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:57.150 ************************************ 00:45:57.150 START TEST spdk_target_abort 00:45:57.150 ************************************ 00:45:57.150 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:45:57.150 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:45:57.150 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:45:57.150 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.150 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:57.473 spdk_targetn1 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:57.473 [2024-10-01 17:45:55.787960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:57.473 [2024-10-01 17:45:55.828286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:57.473 17:45:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:57.733 [2024-10-01 17:45:56.040049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:344 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.040074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:45:57.733 [2024-10-01 17:45:56.050411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:760 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0061 p:1 m:0 dnr:0 00:45:57.733 [2024-10-01 17:45:56.063449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1128 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.063465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:008e p:1 m:0 dnr:0 00:45:57.733 [2024-10-01 17:45:56.096559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2344 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.096575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:57.733 [2024-10-01 17:45:56.119911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3248 len:8 PRP1 0x2000078be000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.119927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0097 p:0 m:0 dnr:0 00:45:57.733 [2024-10-01 17:45:56.120204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3256 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:45:57.733 [2024-10-01 17:45:56.120214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0099 p:0 m:0 dnr:0 00:46:01.029 Initializing NVMe Controllers 00:46:01.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:01.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:01.029 Initialization complete. Launching workers. 
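[editor's note] The spdk_target_abort setup traced a few records above attaches the local PCIe NVMe device as a bdev and exports it over NVMe/TCP before the abort workload starts. A minimal sketch of that provisioning using scripts/rpc.py is below; SPDK_DIR is an assumption, and the RPC names and arguments simply mirror the rpc_cmd calls in the trace.

```bash
# Minimal sketch of the spdk_target provisioning traced above, issued with scripts/rpc.py
# against the default /var/tmp/spdk.sock socket.
RPC="$SPDK_DIR/scripts/rpc.py"

# Attach the local PCIe NVMe device; its namespace shows up as bdev spdk_targetn1.
"$RPC" bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target

# Export that bdev over NVMe/TCP on the namespaced address.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```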
00:46:01.029 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12839, failed: 6 00:46:01.029 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3584, failed to submit 9261 00:46:01.029 success 769, unsuccessful 2815, failed 0 00:46:01.029 17:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:01.029 17:45:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:01.029 [2024-10-01 17:45:59.372091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:288 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:46:01.029 [2024-10-01 17:45:59.372130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:46:01.029 [2024-10-01 17:45:59.528127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:4072 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:46:01.029 [2024-10-01 17:45:59.528154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:46:02.416 [2024-10-01 17:46:00.785106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:32128 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:46:02.416 [2024-10-01 17:46:00.785148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:46:04.330 Initializing NVMe Controllers 00:46:04.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:04.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:04.330 Initialization complete. Launching workers. 
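[editor's note] The run that just completed is the first pass of the abort sweep; the qd=24 and qd=64 passes follow below. For orientation, a minimal sketch of the loop is given here: the abort example drives 4 KiB mixed I/O at each queue depth and aborts roughly half of the commands in flight (-M 50). SPDK_DIR is an assumption; the flags and the transport ID string come from the trace.

```bash
# Minimal sketch of the abort queue-depth sweep traced in this section.
SPDK_DIR=/path/to/spdk
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done
```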
00:46:04.330 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8428, failed: 3 00:46:04.330 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7197 00:46:04.330 success 337, unsuccessful 897, failed 0 00:46:04.330 17:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:04.330 17:46:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:04.330 [2024-10-01 17:46:02.876929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:23736 len:8 PRP1 0x2000078d0000 PRP2 0x0 00:46:04.330 [2024-10-01 17:46:02.876961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:187 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:46:05.272 [2024-10-01 17:46:03.460691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:177 nsid:1 lba:88856 len:8 PRP1 0x200007912000 PRP2 0x0 00:46:05.272 [2024-10-01 17:46:03.460713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:177 cdw0:0 sqhd:0092 p:0 m:0 dnr:0 00:46:06.656 [2024-10-01 17:46:04.861630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:164 nsid:1 lba:245224 len:8 PRP1 0x2000078de000 PRP2 0x0 00:46:06.656 [2024-10-01 17:46:04.861661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:164 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:46:07.227 Initializing NVMe Controllers 00:46:07.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:07.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:07.227 Initialization complete. Launching workers. 
00:46:07.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41914, failed: 3 00:46:07.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2623, failed to submit 39294 00:46:07.227 success 618, unsuccessful 2005, failed 0 00:46:07.227 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:07.227 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.227 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:07.227 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.228 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:07.228 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.228 17:46:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3427342 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3427342 ']' 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3427342 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3427342 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3427342' 00:46:09.139 killing process with pid 3427342 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3427342 00:46:09.139 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3427342 00:46:09.401 00:46:09.401 real 0m12.252s 00:46:09.401 user 0m50.105s 00:46:09.401 sys 0m1.812s 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:09.401 ************************************ 00:46:09.401 END TEST spdk_target_abort 00:46:09.401 ************************************ 00:46:09.401 17:46:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:09.401 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:09.401 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:09.401 17:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:09.401 ************************************ 00:46:09.401 START TEST kernel_target_abort 00:46:09.401 
************************************ 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:09.401 17:46:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:12.702 Waiting for block devices as requested 00:46:12.702 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:12.702 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:12.702 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:12.702 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:12.702 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:12.963 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:12.963 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:12.963 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:13.224 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:13.224 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:13.486 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:13.486 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:13.486 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:13.486 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:13.746 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:13.746 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:13.746 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:14.007 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:14.268 No valid GPT data, bailing 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:14.268 17:46:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:46:14.268 00:46:14.268 Discovery Log Number of Records 2, Generation counter 2 00:46:14.268 =====Discovery Log Entry 0====== 00:46:14.268 trtype: tcp 00:46:14.268 adrfam: ipv4 00:46:14.268 subtype: current discovery subsystem 00:46:14.268 treq: not specified, sq flow control disable supported 00:46:14.268 portid: 1 00:46:14.268 trsvcid: 4420 00:46:14.268 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:14.268 traddr: 10.0.0.1 00:46:14.268 eflags: none 00:46:14.268 sectype: none 00:46:14.268 =====Discovery Log Entry 1====== 00:46:14.268 trtype: tcp 00:46:14.268 adrfam: ipv4 00:46:14.268 subtype: nvme subsystem 00:46:14.268 treq: not specified, sq flow control disable supported 00:46:14.268 portid: 1 00:46:14.268 trsvcid: 4420 00:46:14.268 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:14.268 traddr: 10.0.0.1 00:46:14.268 eflags: none 00:46:14.268 sectype: none 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:14.268 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.269 17:46:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:14.269 17:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:17.569 Initializing NVMe Controllers 00:46:17.569 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:17.569 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:17.569 Initialization complete. Launching workers. 00:46:17.569 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67592, failed: 0 00:46:17.569 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67592, failed to submit 0 00:46:17.569 success 0, unsuccessful 67592, failed 0 00:46:17.569 17:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:17.569 17:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:20.870 Initializing NVMe Controllers 00:46:20.870 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:20.870 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:20.870 Initialization complete. Launching workers. 
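The configure_kernel_target sequence traced at the top of this test drives the Linux kernel nvmet target entirely through configfs. A minimal sketch of that flow in plain shell follows; the subsystem NQN, the 10.0.0.1:4420 listener and the /dev/nvme0n1 backing device come straight from the log, while the attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs layout rather than variables copied out of nvmf/common.sh, so treat it as an illustration, not the script itself.

  # sketch: build a kernel NVMe/TCP target the way the traced helper does (run as root)
  modprobe nvmet nvmet_tcp   # the log loads nvmet; nvmet_tcp is usually pulled in on demand

  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn

  mkdir "$subsys"                         # configfs mkdir creates the subsystem
  mkdir "$subsys/namespaces/1"
  mkdir /sys/kernel/config/nvmet/ports/1

  echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back the namespace with the local NVMe disk
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam

  ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/   # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus testnqn, as above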
00:46:20.870 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108419, failed: 0 00:46:20.870 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27290, failed to submit 81129 00:46:20.870 success 0, unsuccessful 27290, failed 0 00:46:20.870 17:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:20.870 17:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:23.414 Initializing NVMe Controllers 00:46:23.414 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:23.414 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:23.414 Initialization complete. Launching workers. 00:46:23.414 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102340, failed: 0 00:46:23.414 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25570, failed to submit 76770 00:46:23.414 success 0, unsuccessful 25570, failed 0 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:23.674 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:23.675 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:46:23.675 17:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:46:23.675 17:46:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:26.981 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:26.981 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:46:26.981 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:28.991 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:28.991 00:46:28.991 real 0m19.639s 00:46:28.991 user 0m9.782s 00:46:28.991 sys 0m5.608s 00:46:28.991 17:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:28.991 17:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:28.991 ************************************ 00:46:28.991 END TEST kernel_target_abort 00:46:28.991 ************************************ 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:28.991 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:28.991 rmmod nvme_tcp 00:46:28.991 rmmod nvme_fabrics 00:46:29.252 rmmod nvme_keyring 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3427342 ']' 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3427342 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3427342 ']' 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3427342 00:46:29.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3427342) - No such process 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3427342 is not found' 00:46:29.252 Process with pid 3427342 is not found 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:46:29.252 17:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:32.553 Waiting for block devices as requested 00:46:32.553 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:32.553 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:32.553 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:32.553 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:32.553 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:32.553 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:32.815 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:32.815 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:32.815 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:33.076 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:33.076 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:33.337 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:33.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:33.337 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:33.337 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:33.597 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:33.597 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:33.857 17:46:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:36.403 17:46:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:36.403 00:46:36.403 real 0m51.049s 00:46:36.403 user 1m5.059s 00:46:36.403 sys 0m18.086s 00:46:36.403 17:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:36.403 17:46:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:36.403 ************************************ 00:46:36.403 END TEST nvmf_abort_qd_sizes 00:46:36.403 ************************************ 00:46:36.403 17:46:34 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:36.403 17:46:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:36.403 17:46:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:36.403 17:46:34 -- common/autotest_common.sh@10 -- # set +x 00:46:36.403 ************************************ 00:46:36.403 START TEST keyring_file 00:46:36.403 ************************************ 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:36.404 * Looking for test storage... 
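Before the keyring_file output continues: the three abort runs above come from the rabort helper in abort_qd_sizes.sh, which concatenates trtype/adrfam/traddr/trsvcid/subnqn into a single transport ID string and reruns the SPDK abort example at queue depths 4, 24 and 64. A compact sketch of that loop, using the arguments visible in the trace (only the SPDK_ROOT shorthand is invented here):

  # sketch: rerun the SPDK abort example against the kernel target at several queue depths,
  # mirroring the rabort invocations traced above (SPDK_ROOT is an illustrative shorthand)
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  for qd in 4 24 64; do
      # -q: queue depth, -w rw -M 50: mixed read/write, -o 4096: 4 KiB I/Os, -r: target transport ID
      "$SPDK_ROOT/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
  done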
00:46:36.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:36.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.404 --rc genhtml_branch_coverage=1 00:46:36.404 --rc genhtml_function_coverage=1 00:46:36.404 --rc genhtml_legend=1 00:46:36.404 --rc geninfo_all_blocks=1 00:46:36.404 --rc geninfo_unexecuted_blocks=1 00:46:36.404 00:46:36.404 ' 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:36.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.404 --rc genhtml_branch_coverage=1 00:46:36.404 --rc genhtml_function_coverage=1 00:46:36.404 --rc genhtml_legend=1 00:46:36.404 --rc geninfo_all_blocks=1 
00:46:36.404 --rc geninfo_unexecuted_blocks=1 00:46:36.404 00:46:36.404 ' 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:36.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.404 --rc genhtml_branch_coverage=1 00:46:36.404 --rc genhtml_function_coverage=1 00:46:36.404 --rc genhtml_legend=1 00:46:36.404 --rc geninfo_all_blocks=1 00:46:36.404 --rc geninfo_unexecuted_blocks=1 00:46:36.404 00:46:36.404 ' 00:46:36.404 17:46:34 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:36.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:36.404 --rc genhtml_branch_coverage=1 00:46:36.404 --rc genhtml_function_coverage=1 00:46:36.404 --rc genhtml_legend=1 00:46:36.404 --rc geninfo_all_blocks=1 00:46:36.404 --rc geninfo_unexecuted_blocks=1 00:46:36.404 00:46:36.404 ' 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:36.404 17:46:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:36.404 17:46:34 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:36.404 17:46:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:36.404 17:46:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:36.404 17:46:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:36.404 17:46:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:36.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:36.404 17:46:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6fHcW4H4hn 00:46:36.404 17:46:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:36.404 17:46:34 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6fHcW4H4hn 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6fHcW4H4hn 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6fHcW4H4hn 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IBOVtRgFSm 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:36.405 17:46:34 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IBOVtRgFSm 00:46:36.405 17:46:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IBOVtRgFSm 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IBOVtRgFSm 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=3437368 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3437368 00:46:36.405 17:46:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3437368 ']' 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:36.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:36.405 17:46:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:36.405 [2024-10-01 17:46:34.883198] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:46:36.405 [2024-10-01 17:46:34.883273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437368 ] 00:46:36.405 [2024-10-01 17:46:34.949056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.667 [2024-10-01 17:46:34.989111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:36.667 17:46:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:36.667 17:46:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:36.667 17:46:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:36.667 17:46:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.667 17:46:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:36.667 [2024-10-01 17:46:35.173307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:36.667 null0 00:46:36.667 [2024-10-01 17:46:35.205355] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:36.667 [2024-10-01 17:46:35.205754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.928 17:46:35 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:36.928 [2024-10-01 17:46:35.237429] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:36.928 request: 00:46:36.928 { 00:46:36.928 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:36.928 "secure_channel": false, 00:46:36.928 "listen_address": { 00:46:36.928 "trtype": "tcp", 00:46:36.928 "traddr": "127.0.0.1", 00:46:36.928 "trsvcid": "4420" 00:46:36.928 }, 00:46:36.928 "method": "nvmf_subsystem_add_listener", 00:46:36.928 "req_id": 1 00:46:36.928 } 00:46:36.928 Got JSON-RPC error response 00:46:36.928 response: 00:46:36.928 { 00:46:36.928 
"code": -32602, 00:46:36.928 "message": "Invalid parameters" 00:46:36.928 } 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:36.928 17:46:35 keyring_file -- keyring/file.sh@47 -- # bperfpid=3437555 00:46:36.928 17:46:35 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3437555 /var/tmp/bperf.sock 00:46:36.928 17:46:35 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3437555 ']' 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:36.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:36.928 17:46:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:36.928 [2024-10-01 17:46:35.295679] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:46:36.928 [2024-10-01 17:46:35.295727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437555 ] 00:46:36.929 [2024-10-01 17:46:35.370111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.929 [2024-10-01 17:46:35.401316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:37.870 17:46:36 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:37.870 17:46:36 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:37.870 17:46:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:37.870 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:37.870 17:46:36 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IBOVtRgFSm 00:46:37.870 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IBOVtRgFSm 00:46:37.870 17:46:36 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:37.870 17:46:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:37.870 17:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.870 17:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:37.870 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:46:38.131 17:46:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6fHcW4H4hn == \/\t\m\p\/\t\m\p\.\6\f\H\c\W\4\H\4\h\n ]] 00:46:38.131 17:46:36 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:38.131 17:46:36 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:38.131 17:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:38.131 17:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:38.131 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:38.392 17:46:36 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.IBOVtRgFSm == \/\t\m\p\/\t\m\p\.\I\B\O\V\t\R\g\F\S\m ]] 00:46:38.392 17:46:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:38.392 17:46:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:38.392 17:46:36 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:38.392 17:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:38.653 17:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:38.653 17:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:38.653 17:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:38.653 17:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:38.653 17:46:37 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:38.653 17:46:37 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.653 17:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.913 [2024-10-01 17:46:37.248939] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:38.913 nvme0n1 00:46:38.913 17:46:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:38.913 17:46:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:38.913 17:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:38.913 17:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:38.913 17:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:38.913 17:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:39.174 17:46:37 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:39.174 17:46:37 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:39.174 17:46:37 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:46:39.174 17:46:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:39.174 17:46:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:39.174 17:46:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:39.174 17:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:39.174 17:46:37 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:39.174 17:46:37 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:39.434 Running I/O for 1 seconds... 00:46:40.377 16521.00 IOPS, 64.54 MiB/s 00:46:40.377 Latency(us) 00:46:40.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:40.377 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:40.377 nvme0n1 : 1.01 16532.56 64.58 0.00 0.00 7713.00 6444.37 16493.23 00:46:40.377 =================================================================================================================== 00:46:40.377 Total : 16532.56 64.58 0.00 0.00 7713.00 6444.37 16493.23 00:46:40.377 { 00:46:40.377 "results": [ 00:46:40.377 { 00:46:40.377 "job": "nvme0n1", 00:46:40.377 "core_mask": "0x2", 00:46:40.377 "workload": "randrw", 00:46:40.377 "percentage": 50, 00:46:40.377 "status": "finished", 00:46:40.377 "queue_depth": 128, 00:46:40.377 "io_size": 4096, 00:46:40.377 "runtime": 1.007043, 00:46:40.377 "iops": 16532.56117166794, 00:46:40.377 "mibps": 64.5803170768279, 00:46:40.377 "io_failed": 0, 00:46:40.377 "io_timeout": 0, 00:46:40.377 "avg_latency_us": 7713.003277874547, 00:46:40.377 "min_latency_us": 6444.373333333333, 00:46:40.377 "max_latency_us": 16493.226666666666 00:46:40.377 } 00:46:40.377 ], 00:46:40.377 "core_count": 1 00:46:40.377 } 00:46:40.377 17:46:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:40.377 17:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:40.641 17:46:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:40.641 17:46:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:40.641 17:46:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:40.641 17:46:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.641 17:46:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:40.641 17:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.641 17:46:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:40.641 17:46:39 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:40.641 17:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:40.641 17:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:40.641 17:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.641 17:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:40.641 17:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.903 
17:46:39 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:40.903 17:46:39 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:40.903 17:46:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:40.903 17:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:41.163 [2024-10-01 17:46:39.478058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:41.164 [2024-10-01 17:46:39.478997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9b110 (107): Transport endpoint is not connected 00:46:41.164 [2024-10-01 17:46:39.479990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9b110 (9): Bad file descriptor 00:46:41.164 [2024-10-01 17:46:39.480992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:41.164 [2024-10-01 17:46:39.481002] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:41.164 [2024-10-01 17:46:39.481008] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:41.164 [2024-10-01 17:46:39.481013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
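That failed attach is intentional: the trace wraps the --psk key1 attach in the suite's NOT helper, so the connection error above and the JSON-RPC "Input/output error" in the request/response dump that follows are the expected outcome and count as a pass. A simplified stand-in for that negative-test pattern (this is not the real NOT from autotest_common.sh, just the idea of inverting the exit status):

  # simplified stand-in for the NOT helper used above: invert the exit status so an
  # expected failure (here, attaching with the wrong PSK) makes the test step pass
  not() {
      if "$@"; then
          return 1    # command unexpectedly succeeded
      else
          return 0    # command failed, which is what the test wants
      fi
  }

  # usage mirroring the traced call (socket path and arguments as in the log):
  not /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1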
00:46:41.164 request: 00:46:41.164 { 00:46:41.164 "name": "nvme0", 00:46:41.164 "trtype": "tcp", 00:46:41.164 "traddr": "127.0.0.1", 00:46:41.164 "adrfam": "ipv4", 00:46:41.164 "trsvcid": "4420", 00:46:41.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:41.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:41.164 "prchk_reftag": false, 00:46:41.164 "prchk_guard": false, 00:46:41.164 "hdgst": false, 00:46:41.164 "ddgst": false, 00:46:41.164 "psk": "key1", 00:46:41.164 "allow_unrecognized_csi": false, 00:46:41.164 "method": "bdev_nvme_attach_controller", 00:46:41.164 "req_id": 1 00:46:41.164 } 00:46:41.164 Got JSON-RPC error response 00:46:41.164 response: 00:46:41.164 { 00:46:41.164 "code": -5, 00:46:41.164 "message": "Input/output error" 00:46:41.164 } 00:46:41.164 17:46:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:41.164 17:46:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:41.164 17:46:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:41.164 17:46:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:41.164 17:46:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:41.164 17:46:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:41.164 17:46:39 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:41.164 17:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:41.425 17:46:39 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:41.425 17:46:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:41.425 17:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:41.685 17:46:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:41.685 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:41.685 17:46:40 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:41.685 17:46:40 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:41.685 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:41.946 17:46:40 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:41.946 17:46:40 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.6fHcW4H4hn 00:46:41.946 17:46:40 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:41.946 17:46:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:41.946 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:42.207 [2024-10-01 17:46:40.524728] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6fHcW4H4hn': 0100660 00:46:42.207 [2024-10-01 17:46:40.524750] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:42.207 request: 00:46:42.207 { 00:46:42.207 "name": "key0", 00:46:42.207 "path": "/tmp/tmp.6fHcW4H4hn", 00:46:42.207 "method": "keyring_file_add_key", 00:46:42.207 "req_id": 1 00:46:42.207 } 00:46:42.207 Got JSON-RPC error response 00:46:42.207 response: 00:46:42.207 { 00:46:42.207 "code": -1, 00:46:42.207 "message": "Operation not permitted" 00:46:42.207 } 00:46:42.207 17:46:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:42.207 17:46:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:42.207 17:46:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:42.207 17:46:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:42.207 17:46:40 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.6fHcW4H4hn 00:46:42.207 17:46:40 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6fHcW4H4hn 00:46:42.207 17:46:40 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.6fHcW4H4hn 00:46:42.207 17:46:40 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:42.207 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:42.469 17:46:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:42.469 17:46:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:42.469 17:46:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.469 17:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.729 [2024-10-01 17:46:41.050067] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6fHcW4H4hn': No such file or directory 00:46:42.729 [2024-10-01 17:46:41.050081] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:42.729 [2024-10-01 17:46:41.050094] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:42.729 [2024-10-01 17:46:41.050099] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:42.729 [2024-10-01 17:46:41.050110] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:42.729 [2024-10-01 17:46:41.050115] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:42.729 request: 00:46:42.729 { 00:46:42.729 "name": "nvme0", 00:46:42.729 "trtype": "tcp", 00:46:42.729 "traddr": "127.0.0.1", 00:46:42.729 "adrfam": "ipv4", 00:46:42.729 "trsvcid": "4420", 00:46:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:42.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:42.729 "prchk_reftag": false, 00:46:42.729 "prchk_guard": false, 00:46:42.729 "hdgst": false, 00:46:42.729 "ddgst": false, 00:46:42.729 "psk": "key0", 00:46:42.729 "allow_unrecognized_csi": false, 00:46:42.729 "method": "bdev_nvme_attach_controller", 00:46:42.729 "req_id": 1 00:46:42.729 } 00:46:42.729 Got JSON-RPC error response 00:46:42.729 response: 00:46:42.729 { 00:46:42.729 "code": -19, 00:46:42.729 "message": "No such device" 00:46:42.729 } 00:46:42.729 17:46:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:42.729 17:46:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:42.729 17:46:41 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:42.729 17:46:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:42.729 17:46:41 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:42.729 17:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:42.729 17:46:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:42.729 17:46:41 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:42.729 17:46:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:42.729 17:46:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:42.729 17:46:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:42.730 17:46:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:42.730 17:46:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lJ6RJQ7Cnv 00:46:42.730 17:46:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:42.730 17:46:41 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:42.990 17:46:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lJ6RJQ7Cnv 00:46:42.991 17:46:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lJ6RJQ7Cnv 00:46:42.991 17:46:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.lJ6RJQ7Cnv 00:46:42.991 17:46:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lJ6RJQ7Cnv 00:46:42.991 17:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lJ6RJQ7Cnv 00:46:42.991 17:46:41 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.991 17:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:43.252 nvme0n1 00:46:43.252 17:46:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:43.252 17:46:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.252 17:46:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.252 17:46:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.252 17:46:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.252 17:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.513 17:46:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:43.514 17:46:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:43.514 17:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:43.514 17:46:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:43.514 17:46:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:43.514 17:46:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.514 17:46:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.514 17:46:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.776 17:46:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:43.776 17:46:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:43.776 17:46:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.776 17:46:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.776 17:46:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.776 17:46:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.776 17:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.037 17:46:42 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:44.037 17:46:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:44.037 17:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:44.037 17:46:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:44.037 17:46:42 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:44.037 17:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.298 17:46:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:44.298 17:46:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lJ6RJQ7Cnv 00:46:44.298 17:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lJ6RJQ7Cnv 00:46:44.558 17:46:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IBOVtRgFSm 00:46:44.558 17:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IBOVtRgFSm 00:46:44.818 17:46:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.818 17:46:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.818 nvme0n1 00:46:44.818 17:46:43 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:44.818 17:46:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:45.079 17:46:43 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:45.079 "subsystems": [ 00:46:45.079 { 00:46:45.079 "subsystem": "keyring", 00:46:45.079 "config": [ 00:46:45.079 { 00:46:45.079 "method": "keyring_file_add_key", 00:46:45.079 "params": { 00:46:45.079 "name": "key0", 00:46:45.079 "path": "/tmp/tmp.lJ6RJQ7Cnv" 00:46:45.079 } 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "method": "keyring_file_add_key", 00:46:45.079 "params": { 00:46:45.079 "name": "key1", 00:46:45.079 "path": "/tmp/tmp.IBOVtRgFSm" 00:46:45.079 } 00:46:45.079 } 00:46:45.079 ] 00:46:45.079 
}, 00:46:45.079 { 00:46:45.079 "subsystem": "iobuf", 00:46:45.079 "config": [ 00:46:45.079 { 00:46:45.079 "method": "iobuf_set_options", 00:46:45.079 "params": { 00:46:45.079 "small_pool_count": 8192, 00:46:45.079 "large_pool_count": 1024, 00:46:45.079 "small_bufsize": 8192, 00:46:45.079 "large_bufsize": 135168 00:46:45.079 } 00:46:45.079 } 00:46:45.079 ] 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "subsystem": "sock", 00:46:45.079 "config": [ 00:46:45.079 { 00:46:45.079 "method": "sock_set_default_impl", 00:46:45.079 "params": { 00:46:45.079 "impl_name": "posix" 00:46:45.079 } 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "method": "sock_impl_set_options", 00:46:45.079 "params": { 00:46:45.079 "impl_name": "ssl", 00:46:45.079 "recv_buf_size": 4096, 00:46:45.079 "send_buf_size": 4096, 00:46:45.079 "enable_recv_pipe": true, 00:46:45.079 "enable_quickack": false, 00:46:45.079 "enable_placement_id": 0, 00:46:45.079 "enable_zerocopy_send_server": true, 00:46:45.079 "enable_zerocopy_send_client": false, 00:46:45.079 "zerocopy_threshold": 0, 00:46:45.079 "tls_version": 0, 00:46:45.079 "enable_ktls": false 00:46:45.079 } 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "method": "sock_impl_set_options", 00:46:45.079 "params": { 00:46:45.079 "impl_name": "posix", 00:46:45.079 "recv_buf_size": 2097152, 00:46:45.079 "send_buf_size": 2097152, 00:46:45.079 "enable_recv_pipe": true, 00:46:45.079 "enable_quickack": false, 00:46:45.079 "enable_placement_id": 0, 00:46:45.079 "enable_zerocopy_send_server": true, 00:46:45.079 "enable_zerocopy_send_client": false, 00:46:45.079 "zerocopy_threshold": 0, 00:46:45.079 "tls_version": 0, 00:46:45.079 "enable_ktls": false 00:46:45.079 } 00:46:45.079 } 00:46:45.079 ] 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "subsystem": "vmd", 00:46:45.079 "config": [] 00:46:45.079 }, 00:46:45.079 { 00:46:45.079 "subsystem": "accel", 00:46:45.079 "config": [ 00:46:45.079 { 00:46:45.079 "method": "accel_set_options", 00:46:45.079 "params": { 00:46:45.079 "small_cache_size": 128, 00:46:45.079 "large_cache_size": 16, 00:46:45.079 "task_count": 2048, 00:46:45.079 "sequence_count": 2048, 00:46:45.079 "buf_count": 2048 00:46:45.079 } 00:46:45.079 } 00:46:45.079 ] 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "subsystem": "bdev", 00:46:45.080 "config": [ 00:46:45.080 { 00:46:45.080 "method": "bdev_set_options", 00:46:45.080 "params": { 00:46:45.080 "bdev_io_pool_size": 65535, 00:46:45.080 "bdev_io_cache_size": 256, 00:46:45.080 "bdev_auto_examine": true, 00:46:45.080 "iobuf_small_cache_size": 128, 00:46:45.080 "iobuf_large_cache_size": 16 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_raid_set_options", 00:46:45.080 "params": { 00:46:45.080 "process_window_size_kb": 1024, 00:46:45.080 "process_max_bandwidth_mb_sec": 0 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_iscsi_set_options", 00:46:45.080 "params": { 00:46:45.080 "timeout_sec": 30 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_nvme_set_options", 00:46:45.080 "params": { 00:46:45.080 "action_on_timeout": "none", 00:46:45.080 "timeout_us": 0, 00:46:45.080 "timeout_admin_us": 0, 00:46:45.080 "keep_alive_timeout_ms": 10000, 00:46:45.080 "arbitration_burst": 0, 00:46:45.080 "low_priority_weight": 0, 00:46:45.080 "medium_priority_weight": 0, 00:46:45.080 "high_priority_weight": 0, 00:46:45.080 "nvme_adminq_poll_period_us": 10000, 00:46:45.080 "nvme_ioq_poll_period_us": 0, 00:46:45.080 "io_queue_requests": 512, 00:46:45.080 "delay_cmd_submit": true, 00:46:45.080 
"transport_retry_count": 4, 00:46:45.080 "bdev_retry_count": 3, 00:46:45.080 "transport_ack_timeout": 0, 00:46:45.080 "ctrlr_loss_timeout_sec": 0, 00:46:45.080 "reconnect_delay_sec": 0, 00:46:45.080 "fast_io_fail_timeout_sec": 0, 00:46:45.080 "disable_auto_failback": false, 00:46:45.080 "generate_uuids": false, 00:46:45.080 "transport_tos": 0, 00:46:45.080 "nvme_error_stat": false, 00:46:45.080 "rdma_srq_size": 0, 00:46:45.080 "io_path_stat": false, 00:46:45.080 "allow_accel_sequence": false, 00:46:45.080 "rdma_max_cq_size": 0, 00:46:45.080 "rdma_cm_event_timeout_ms": 0, 00:46:45.080 "dhchap_digests": [ 00:46:45.080 "sha256", 00:46:45.080 "sha384", 00:46:45.080 "sha512" 00:46:45.080 ], 00:46:45.080 "dhchap_dhgroups": [ 00:46:45.080 "null", 00:46:45.080 "ffdhe2048", 00:46:45.080 "ffdhe3072", 00:46:45.080 "ffdhe4096", 00:46:45.080 "ffdhe6144", 00:46:45.080 "ffdhe8192" 00:46:45.080 ] 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_nvme_attach_controller", 00:46:45.080 "params": { 00:46:45.080 "name": "nvme0", 00:46:45.080 "trtype": "TCP", 00:46:45.080 "adrfam": "IPv4", 00:46:45.080 "traddr": "127.0.0.1", 00:46:45.080 "trsvcid": "4420", 00:46:45.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:45.080 "prchk_reftag": false, 00:46:45.080 "prchk_guard": false, 00:46:45.080 "ctrlr_loss_timeout_sec": 0, 00:46:45.080 "reconnect_delay_sec": 0, 00:46:45.080 "fast_io_fail_timeout_sec": 0, 00:46:45.080 "psk": "key0", 00:46:45.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:45.080 "hdgst": false, 00:46:45.080 "ddgst": false 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_nvme_set_hotplug", 00:46:45.080 "params": { 00:46:45.080 "period_us": 100000, 00:46:45.080 "enable": false 00:46:45.080 } 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "method": "bdev_wait_for_examine" 00:46:45.080 } 00:46:45.080 ] 00:46:45.080 }, 00:46:45.080 { 00:46:45.080 "subsystem": "nbd", 00:46:45.080 "config": [] 00:46:45.080 } 00:46:45.080 ] 00:46:45.080 }' 00:46:45.080 17:46:43 keyring_file -- keyring/file.sh@115 -- # killprocess 3437555 00:46:45.080 17:46:43 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3437555 ']' 00:46:45.080 17:46:43 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3437555 00:46:45.080 17:46:43 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:45.080 17:46:43 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:45.080 17:46:43 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437555 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437555' 00:46:45.340 killing process with pid 3437555 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@969 -- # kill 3437555 00:46:45.340 Received shutdown signal, test time was about 1.000000 seconds 00:46:45.340 00:46:45.340 Latency(us) 00:46:45.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:45.340 =================================================================================================================== 00:46:45.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@974 -- # wait 3437555 00:46:45.340 17:46:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=3439066 00:46:45.340 
17:46:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3439066 /var/tmp/bperf.sock 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3439066 ']' 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:45.340 17:46:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:45.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:45.340 17:46:43 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:45.340 17:46:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:45.340 "subsystems": [ 00:46:45.340 { 00:46:45.340 "subsystem": "keyring", 00:46:45.340 "config": [ 00:46:45.340 { 00:46:45.340 "method": "keyring_file_add_key", 00:46:45.340 "params": { 00:46:45.340 "name": "key0", 00:46:45.340 "path": "/tmp/tmp.lJ6RJQ7Cnv" 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "keyring_file_add_key", 00:46:45.340 "params": { 00:46:45.340 "name": "key1", 00:46:45.340 "path": "/tmp/tmp.IBOVtRgFSm" 00:46:45.340 } 00:46:45.340 } 00:46:45.340 ] 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "subsystem": "iobuf", 00:46:45.340 "config": [ 00:46:45.340 { 00:46:45.340 "method": "iobuf_set_options", 00:46:45.340 "params": { 00:46:45.340 "small_pool_count": 8192, 00:46:45.340 "large_pool_count": 1024, 00:46:45.340 "small_bufsize": 8192, 00:46:45.340 "large_bufsize": 135168 00:46:45.340 } 00:46:45.340 } 00:46:45.340 ] 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "subsystem": "sock", 00:46:45.340 "config": [ 00:46:45.340 { 00:46:45.340 "method": "sock_set_default_impl", 00:46:45.340 "params": { 00:46:45.340 "impl_name": "posix" 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "sock_impl_set_options", 00:46:45.340 "params": { 00:46:45.340 "impl_name": "ssl", 00:46:45.340 "recv_buf_size": 4096, 00:46:45.340 "send_buf_size": 4096, 00:46:45.340 "enable_recv_pipe": true, 00:46:45.340 "enable_quickack": false, 00:46:45.340 "enable_placement_id": 0, 00:46:45.340 "enable_zerocopy_send_server": true, 00:46:45.340 "enable_zerocopy_send_client": false, 00:46:45.340 "zerocopy_threshold": 0, 00:46:45.340 "tls_version": 0, 00:46:45.340 "enable_ktls": false 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "sock_impl_set_options", 00:46:45.340 "params": { 00:46:45.340 "impl_name": "posix", 00:46:45.340 "recv_buf_size": 2097152, 00:46:45.340 "send_buf_size": 2097152, 00:46:45.340 "enable_recv_pipe": true, 00:46:45.340 "enable_quickack": false, 00:46:45.340 "enable_placement_id": 0, 00:46:45.340 "enable_zerocopy_send_server": true, 00:46:45.340 "enable_zerocopy_send_client": false, 00:46:45.340 "zerocopy_threshold": 0, 00:46:45.340 "tls_version": 0, 00:46:45.340 "enable_ktls": false 00:46:45.340 } 00:46:45.340 } 00:46:45.340 ] 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "subsystem": "vmd", 00:46:45.340 "config": [] 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "subsystem": "accel", 00:46:45.340 "config": [ 00:46:45.340 { 00:46:45.340 "method": "accel_set_options", 00:46:45.340 "params": { 00:46:45.340 "small_cache_size": 
128, 00:46:45.340 "large_cache_size": 16, 00:46:45.340 "task_count": 2048, 00:46:45.340 "sequence_count": 2048, 00:46:45.340 "buf_count": 2048 00:46:45.340 } 00:46:45.340 } 00:46:45.340 ] 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "subsystem": "bdev", 00:46:45.340 "config": [ 00:46:45.340 { 00:46:45.340 "method": "bdev_set_options", 00:46:45.340 "params": { 00:46:45.340 "bdev_io_pool_size": 65535, 00:46:45.340 "bdev_io_cache_size": 256, 00:46:45.340 "bdev_auto_examine": true, 00:46:45.340 "iobuf_small_cache_size": 128, 00:46:45.340 "iobuf_large_cache_size": 16 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "bdev_raid_set_options", 00:46:45.340 "params": { 00:46:45.340 "process_window_size_kb": 1024, 00:46:45.340 "process_max_bandwidth_mb_sec": 0 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "bdev_iscsi_set_options", 00:46:45.340 "params": { 00:46:45.340 "timeout_sec": 30 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "bdev_nvme_set_options", 00:46:45.340 "params": { 00:46:45.340 "action_on_timeout": "none", 00:46:45.340 "timeout_us": 0, 00:46:45.340 "timeout_admin_us": 0, 00:46:45.340 "keep_alive_timeout_ms": 10000, 00:46:45.340 "arbitration_burst": 0, 00:46:45.340 "low_priority_weight": 0, 00:46:45.340 "medium_priority_weight": 0, 00:46:45.340 "high_priority_weight": 0, 00:46:45.340 "nvme_adminq_poll_period_us": 10000, 00:46:45.340 "nvme_ioq_poll_period_us": 0, 00:46:45.340 "io_queue_requests": 512, 00:46:45.340 "delay_cmd_submit": true, 00:46:45.340 "transport_retry_count": 4, 00:46:45.340 "bdev_retry_count": 3, 00:46:45.340 "transport_ack_timeout": 0, 00:46:45.340 "ctrlr_loss_timeout_sec": 0, 00:46:45.340 "reconnect_delay_sec": 0, 00:46:45.340 "fast_io_fail_timeout_sec": 0, 00:46:45.340 "disable_auto_failback": false, 00:46:45.340 "generate_uuids": false, 00:46:45.340 "transport_tos": 0, 00:46:45.340 "nvme_error_stat": false, 00:46:45.340 "rdma_srq_size": 0, 00:46:45.340 "io_path_stat": false, 00:46:45.340 "allow_accel_sequence": false, 00:46:45.340 "rdma_max_cq_size": 0, 00:46:45.340 "rdma_cm_event_timeout_ms": 0, 00:46:45.340 "dhchap_digests": [ 00:46:45.340 "sha256", 00:46:45.340 "sha384", 00:46:45.340 "sha512" 00:46:45.340 ], 00:46:45.340 "dhchap_dhgroups": [ 00:46:45.340 "null", 00:46:45.340 "ffdhe2048", 00:46:45.340 "ffdhe3072", 00:46:45.340 "ffdhe4096", 00:46:45.340 "ffdhe6144", 00:46:45.340 "ffdhe8192" 00:46:45.340 ] 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "bdev_nvme_attach_controller", 00:46:45.340 "params": { 00:46:45.340 "name": "nvme0", 00:46:45.340 "trtype": "TCP", 00:46:45.340 "adrfam": "IPv4", 00:46:45.340 "traddr": "127.0.0.1", 00:46:45.340 "trsvcid": "4420", 00:46:45.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:45.340 "prchk_reftag": false, 00:46:45.340 "prchk_guard": false, 00:46:45.340 "ctrlr_loss_timeout_sec": 0, 00:46:45.340 "reconnect_delay_sec": 0, 00:46:45.340 "fast_io_fail_timeout_sec": 0, 00:46:45.340 "psk": "key0", 00:46:45.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:45.340 "hdgst": false, 00:46:45.340 "ddgst": false 00:46:45.340 } 00:46:45.340 }, 00:46:45.340 { 00:46:45.340 "method": "bdev_nvme_set_hotplug", 00:46:45.340 "params": { 00:46:45.340 "period_us": 100000, 00:46:45.340 "enable": false 00:46:45.340 } 00:46:45.340 }, 00:46:45.341 { 00:46:45.341 "method": "bdev_wait_for_examine" 00:46:45.341 } 00:46:45.341 ] 00:46:45.341 }, 00:46:45.341 { 00:46:45.341 "subsystem": "nbd", 00:46:45.341 "config": [] 00:46:45.341 } 00:46:45.341 ] 00:46:45.341 }' 
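[editor's note] For context on the restart sequence traced above: keyring/file.sh@113 captures the first bdevperf instance's runtime configuration with save_config, kills that process, and file.sh@116 launches a fresh bdevperf that reads the same JSON back through a process-substitution path (the -c /dev/fd/63 visible in the command line). A minimal sketch of that pattern, with SPDK_DIR standing in for the checked-out SPDK tree and all options copied from the trace:

# Sketch only: replay a saved bdevperf JSON config into a new instance.
rpc="$SPDK_DIR/scripts/rpc.py"      # placeholder path; the trace uses the full jenkins workspace path
sock=/var/tmp/bperf.sock

# 1) Dump the live configuration (keyring, sock, bdev, ... subsystems) as one JSON document.
config=$("$rpc" -s "$sock" save_config)

# 2) After stopping the old instance, start a new one that loads that JSON at boot.
#    <(echo ...) expands to a /dev/fd/NN path, which is why the trace shows -c /dev/fd/63.
"$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$sock" -z -c <(echo "$config") &

Because the saved config already contains the keyring_file_add_key and bdev_nvme_attach_controller calls, the relaunched instance re-registers key0/key1 and re-attaches nvme0 without any further RPCs, which is what the key-count and controller checks that follow rely on.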
00:46:45.341 17:46:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:45.341 [2024-10-01 17:46:43.818244] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:46:45.341 [2024-10-01 17:46:43.818304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439066 ] 00:46:45.601 [2024-10-01 17:46:43.893486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:45.601 [2024-10-01 17:46:43.921168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:45.601 [2024-10-01 17:46:44.058363] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:46.171 17:46:44 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:46.171 17:46:44 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:46.171 17:46:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:46.171 17:46:44 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:46.171 17:46:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.432 17:46:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:46.432 17:46:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.432 17:46:44 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:46.432 17:46:44 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:46.432 17:46:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.433 17:46:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.433 17:46:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:46.433 17:46:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.692 17:46:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:46.692 17:46:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:46.692 17:46:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:46.692 17:46:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:46.953 17:46:45 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:46.953 17:46:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:46.953 17:46:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lJ6RJQ7Cnv /tmp/tmp.IBOVtRgFSm 00:46:46.953 17:46:45 keyring_file -- keyring/file.sh@20 -- # killprocess 3439066 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3439066 ']' 
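[editor's note] The verification steps above (file.sh@121 through @124) all go through the same small query helpers from keyring/common.sh: count the registered keys, select one key object by name, and read its refcnt. A hedged sketch of those helpers, using the rpc.py and jq invocations as they appear in the trace (SPDK_DIR is a placeholder, and the comment on the expected refcnt is an interpretation, not something the log states):

# Sketch of the keyring/common.sh query helpers used throughout this test.
rpc="$SPDK_DIR/scripts/rpc.py"
sock=/var/tmp/bperf.sock

key_count()  { "$rpc" -s "$sock" keyring_get_keys | jq length; }
get_key()    { "$rpc" -s "$sock" keyring_get_keys | jq --arg n "$1" '.[] | select(.name == $n)'; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

# After the config replay the test expects both keys back and key0 still in use:
(( $(key_count) == 2 ))
(( $(get_refcnt key0) == 2 ))   # presumably one reference from the keyring entry plus one from the attached nvme0 controller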
00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3439066 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3439066 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3439066' 00:46:46.953 killing process with pid 3439066 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@969 -- # kill 3439066 00:46:46.953 Received shutdown signal, test time was about 1.000000 seconds 00:46:46.953 00:46:46.953 Latency(us) 00:46:46.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:46.953 =================================================================================================================== 00:46:46.953 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@974 -- # wait 3439066 00:46:46.953 17:46:45 keyring_file -- keyring/file.sh@21 -- # killprocess 3437368 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3437368 ']' 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3437368 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:46.953 17:46:45 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437368 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437368' 00:46:47.214 killing process with pid 3437368 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@969 -- # kill 3437368 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@974 -- # wait 3437368 00:46:47.214 00:46:47.214 real 0m11.297s 00:46:47.214 user 0m27.673s 00:46:47.214 sys 0m2.614s 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:47.214 17:46:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.476 ************************************ 00:46:47.476 END TEST keyring_file 00:46:47.476 ************************************ 00:46:47.476 17:46:45 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:46:47.476 17:46:45 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:47.476 17:46:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:47.476 17:46:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:47.476 17:46:45 -- common/autotest_common.sh@10 -- # set +x 00:46:47.476 ************************************ 00:46:47.476 START TEST keyring_linux 00:46:47.476 ************************************ 00:46:47.476 17:46:45 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:47.476 Joined session keyring: 61078332 00:46:47.476 * Looking for test storage... 00:46:47.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:47.476 17:46:45 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:47.476 17:46:45 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:46:47.476 17:46:45 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:47.476 17:46:46 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:47.476 17:46:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:47.738 17:46:46 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:47.738 17:46:46 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:47.738 --rc genhtml_branch_coverage=1 00:46:47.738 --rc genhtml_function_coverage=1 00:46:47.738 --rc genhtml_legend=1 00:46:47.738 --rc geninfo_all_blocks=1 00:46:47.738 --rc geninfo_unexecuted_blocks=1 00:46:47.738 00:46:47.738 ' 00:46:47.738 17:46:46 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:47.738 --rc genhtml_branch_coverage=1 00:46:47.738 --rc genhtml_function_coverage=1 00:46:47.738 --rc genhtml_legend=1 00:46:47.738 --rc geninfo_all_blocks=1 00:46:47.738 --rc geninfo_unexecuted_blocks=1 00:46:47.738 00:46:47.738 ' 00:46:47.738 17:46:46 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:47.738 --rc genhtml_branch_coverage=1 00:46:47.738 --rc genhtml_function_coverage=1 00:46:47.738 --rc genhtml_legend=1 00:46:47.738 --rc geninfo_all_blocks=1 00:46:47.738 --rc geninfo_unexecuted_blocks=1 00:46:47.738 00:46:47.738 ' 00:46:47.738 17:46:46 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:47.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:47.738 --rc genhtml_branch_coverage=1 00:46:47.738 --rc genhtml_function_coverage=1 00:46:47.738 --rc genhtml_legend=1 00:46:47.738 --rc geninfo_all_blocks=1 00:46:47.738 --rc geninfo_unexecuted_blocks=1 00:46:47.738 00:46:47.738 ' 00:46:47.738 17:46:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:47.738 17:46:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:47.738 17:46:46 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:47.738 17:46:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:47.739 17:46:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:47.739 17:46:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:47.739 17:46:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:47.739 17:46:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:47.739 17:46:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:47.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@731 -- # python - 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:47.739 /tmp/:spdk-test:key0 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:47.739 
17:46:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:46:47.739 17:46:46 keyring_linux -- nvmf/common.sh@731 -- # python - 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:47.739 17:46:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:47.739 /tmp/:spdk-test:key1 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3439728 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3439728 00:46:47.739 17:46:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3439728 ']' 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:47.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:47.739 17:46:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:47.739 [2024-10-01 17:46:46.242504] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
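[editor's note] prep_key, as traced above (keyring/common.sh@15-23), turns a raw hex key plus a digest selector into an NVMe TLS PSK in interchange format, writes it to a mode-0600 file (/tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 here), and echoes the path. A rough sketch of what those steps amount to; the little-endian CRC-32 trailer in the base64 payload is an assumption about what the python snippet behind format_interchange_psk computes, while the file handling mirrors the trace. The resulting interchange string for key0 appears verbatim in the keyctl add call further down.

# Sketch: build an NVMeTLSkey-1 interchange string and store it the way prep_key does.
key=00112233445566778899aabbccddeeff   # raw key material used by the test
digest=0                               # 0 = no PSK digest
path=/tmp/:spdk-test:key0

psk=$(python3 - "$key" "$digest" <<'EOF'
import sys, base64, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# Assumption: interchange payload = base64(key bytes + little-endian CRC-32 of the key bytes).
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
)
printf '%s' "$psk" > "$path"
chmod 0600 "$path"                     # TLS keys must not be world-readable
echo "$path"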
00:46:47.739 [2024-10-01 17:46:46.242584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439728 ] 00:46:47.999 [2024-10-01 17:46:46.306841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:47.999 [2024-10-01 17:46:46.346336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:48.570 [2024-10-01 17:46:47.027088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:48.570 null0 00:46:48.570 [2024-10-01 17:46:47.059130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:48.570 [2024-10-01 17:46:47.059523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:48.570 594916484 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:48.570 1046497038 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3439810 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3439810 /var/tmp/bperf.sock 00:46:48.570 17:46:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3439810 ']' 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:48.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:48.570 17:46:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:48.831 [2024-10-01 17:46:47.137435] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
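[editor's note] Unlike keyring_file, the keyring_linux flow above never hands bdevperf a file path: linux.sh@66-67 loads each interchange string into the caller's session keyring with keyctl, and the returned serial numbers (594916484 and 1046497038 in this run) are what the later search/print/unlink steps operate on. A condensed sketch of that round trip, using only the keyctl subcommands that appear in this trace:

# Sketch: lifecycle of an NVMe TLS PSK in the kernel session keyring, as linux.sh drives it.
name=":spdk-test:key0"
psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"   # value copied from the trace

sn=$(keyctl add user "$name" "$psk" @s)   # add to the session keyring; prints the serial number
keyctl search @s user "$name"             # resolve name -> serial (what get_keysn does)
keyctl print "$sn"                        # read the payload back for the sanity compare
keyctl unlink "$sn"                       # cleanup: "1 links removed" in the trace

bdevperf is then pointed at the key by name rather than by path (--psk :spdk-test:key0 in the attach call that follows), which appears to be the lookup mode that the keyring_linux_set_options --enable RPC turns on before framework_start_init.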
00:46:48.831 [2024-10-01 17:46:47.137485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439810 ] 00:46:48.831 [2024-10-01 17:46:47.211872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:48.831 [2024-10-01 17:46:47.240164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:48.831 17:46:47 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:48.831 17:46:47 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:48.831 17:46:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:48.831 17:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:49.091 17:46:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:49.091 17:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:49.352 17:46:47 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:49.352 17:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:49.352 [2024-10-01 17:46:47.800989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:49.352 nvme0n1 00:46:49.613 17:46:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:46:49.613 17:46:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:49.613 17:46:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:49.613 17:46:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:49.613 17:46:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:49.613 17:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.613 17:46:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:49.613 17:46:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:49.613 17:46:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:49.613 17:46:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:49.613 17:46:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.613 17:46:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.613 17:46:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@25 -- # sn=594916484 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:49.874 17:46:48 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 594916484 == \5\9\4\9\1\6\4\8\4 ]] 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 594916484 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:49.874 17:46:48 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:49.874 Running I/O for 1 seconds... 00:46:50.813 16855.00 IOPS, 65.84 MiB/s 00:46:50.813 Latency(us) 00:46:50.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:50.814 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:50.814 nvme0n1 : 1.01 16854.92 65.84 0.00 0.00 7563.11 6444.37 16493.23 00:46:50.814 =================================================================================================================== 00:46:50.814 Total : 16854.92 65.84 0.00 0.00 7563.11 6444.37 16493.23 00:46:50.814 { 00:46:50.814 "results": [ 00:46:50.814 { 00:46:50.814 "job": "nvme0n1", 00:46:50.814 "core_mask": "0x2", 00:46:50.814 "workload": "randread", 00:46:50.814 "status": "finished", 00:46:50.814 "queue_depth": 128, 00:46:50.814 "io_size": 4096, 00:46:50.814 "runtime": 1.007658, 00:46:50.814 "iops": 16854.924984468937, 00:46:50.814 "mibps": 65.83955072058178, 00:46:50.814 "io_failed": 0, 00:46:50.814 "io_timeout": 0, 00:46:50.814 "avg_latency_us": 7563.110617051341, 00:46:50.814 "min_latency_us": 6444.373333333333, 00:46:50.814 "max_latency_us": 16493.226666666666 00:46:50.814 } 00:46:50.814 ], 00:46:50.814 "core_count": 1 00:46:50.814 } 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:51.074 17:46:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:51.074 17:46:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:51.074 17:46:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.334 17:46:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:51.334 17:46:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:51.334 17:46:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:51.334 17:46:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@638 
-- # local arg=bperf_cmd 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:51.334 17:46:49 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:51.334 17:46:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:51.334 [2024-10-01 17:46:49.878167] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:51.334 [2024-10-01 17:46:49.878896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1fea0 (107): Transport endpoint is not connected 00:46:51.334 [2024-10-01 17:46:49.879892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1fea0 (9): Bad file descriptor 00:46:51.334 [2024-10-01 17:46:49.880895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:51.334 [2024-10-01 17:46:49.880902] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:51.334 [2024-10-01 17:46:49.880908] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:51.334 [2024-10-01 17:46:49.880914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:46:51.595 request: 00:46:51.595 { 00:46:51.595 "name": "nvme0", 00:46:51.595 "trtype": "tcp", 00:46:51.595 "traddr": "127.0.0.1", 00:46:51.595 "adrfam": "ipv4", 00:46:51.595 "trsvcid": "4420", 00:46:51.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:51.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:51.595 "prchk_reftag": false, 00:46:51.595 "prchk_guard": false, 00:46:51.595 "hdgst": false, 00:46:51.595 "ddgst": false, 00:46:51.595 "psk": ":spdk-test:key1", 00:46:51.595 "allow_unrecognized_csi": false, 00:46:51.595 "method": "bdev_nvme_attach_controller", 00:46:51.595 "req_id": 1 00:46:51.595 } 00:46:51.595 Got JSON-RPC error response 00:46:51.595 response: 00:46:51.595 { 00:46:51.595 "code": -5, 00:46:51.595 "message": "Input/output error" 00:46:51.595 } 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@33 -- # sn=594916484 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 594916484 00:46:51.595 1 links removed 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@33 -- # sn=1046497038 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1046497038 00:46:51.595 1 links removed 00:46:51.595 17:46:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3439810 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3439810 ']' 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3439810 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3439810 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3439810' 00:46:51.595 killing process with pid 3439810 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@969 -- # kill 3439810 00:46:51.595 Received shutdown signal, test time was about 1.000000 seconds 00:46:51.595 00:46:51.595 
Latency(us) 00:46:51.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.595 =================================================================================================================== 00:46:51.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:51.595 17:46:49 keyring_linux -- common/autotest_common.sh@974 -- # wait 3439810 00:46:51.595 17:46:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3439728 00:46:51.595 17:46:50 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3439728 ']' 00:46:51.595 17:46:50 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3439728 00:46:51.595 17:46:50 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:51.595 17:46:50 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:51.595 17:46:50 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3439728 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3439728' 00:46:51.856 killing process with pid 3439728 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@969 -- # kill 3439728 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@974 -- # wait 3439728 00:46:51.856 00:46:51.856 real 0m4.541s 00:46:51.856 user 0m8.194s 00:46:51.856 sys 0m1.363s 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:51.856 17:46:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:51.856 ************************************ 00:46:51.856 END TEST keyring_linux 00:46:51.856 ************************************ 00:46:52.117 17:46:50 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:52.117 17:46:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:46:52.117 17:46:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:52.117 17:46:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:52.117 17:46:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:46:52.117 17:46:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:46:52.117 17:46:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:46:52.117 17:46:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:52.117 17:46:50 -- common/autotest_common.sh@10 -- # set +x 00:46:52.117 17:46:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:46:52.117 17:46:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:46:52.117 17:46:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:46:52.117 17:46:50 -- common/autotest_common.sh@10 -- # set +x 00:47:00.258 INFO: APP EXITING 00:47:00.259 INFO: killing all VMs 00:47:00.259 INFO: killing vhost app 00:47:00.259 WARN: 
no vhost pid file found 00:47:00.259 INFO: EXIT DONE 00:47:02.805 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:02.805 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:02.805 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:06.106 Cleaning 00:47:06.106 Removing: /var/run/dpdk/spdk0/config 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:06.106 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:06.106 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:06.106 Removing: /var/run/dpdk/spdk1/config 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:06.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:06.107 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:06.107 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:06.107 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:06.107 Removing: /var/run/dpdk/spdk2/config 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:06.107 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:06.107 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:06.107 Removing: /var/run/dpdk/spdk3/config 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 
00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:06.107 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:06.107 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:06.107 Removing: /var/run/dpdk/spdk4/config 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:06.107 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:06.107 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:06.107 Removing: /dev/shm/bdev_svc_trace.1 00:47:06.107 Removing: /dev/shm/nvmf_trace.0 00:47:06.107 Removing: /dev/shm/spdk_tgt_trace.pid2774130 00:47:06.107 Removing: /var/run/dpdk/spdk0 00:47:06.107 Removing: /var/run/dpdk/spdk1 00:47:06.107 Removing: /var/run/dpdk/spdk2 00:47:06.107 Removing: /var/run/dpdk/spdk3 00:47:06.107 Removing: /var/run/dpdk/spdk4 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2772462 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2774130 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2774654 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2775697 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2775945 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2777099 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2777104 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2777560 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2778679 00:47:06.107 Removing: /var/run/dpdk/spdk_pid2779170 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2779562 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2779950 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2780369 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2780766 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2780901 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2781160 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2781542 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2782607 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2786270 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2786520 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2786708 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2786712 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2787498 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2787651 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2788256 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2788339 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2788630 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2788768 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2788995 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2789009 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2789629 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2789806 00:47:06.367 Removing: /var/run/dpdk/spdk_pid2790206 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2794719 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2800099 00:47:06.368 Removing: 
/var/run/dpdk/spdk_pid2811856 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2812547 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2817610 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2817966 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2823013 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2830023 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2833198 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2846079 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2856852 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2859041 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2860055 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2880752 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2885628 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2985364 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2991884 00:47:06.368 Removing: /var/run/dpdk/spdk_pid2998980 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3006077 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3006163 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3007151 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3008171 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3009195 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3009817 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3009872 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3010165 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3010219 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3010221 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3011226 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3012227 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3013239 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3013908 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3013932 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3014248 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3015443 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3016752 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3026734 00:47:06.368 Removing: /var/run/dpdk/spdk_pid3062525 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3067921 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3070015 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3072502 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3072569 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3072855 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3072873 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3073582 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3075600 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3076678 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3077283 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3079791 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3080458 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3081162 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3085930 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3092593 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3092594 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3092595 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3097243 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3101800 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3107629 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3151584 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3156395 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3164179 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3165690 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3167211 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3168865 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3174486 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3179193 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3188161 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3188254 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3193242 00:47:06.628 Removing: 
/var/run/dpdk/spdk_pid3193368 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3193672 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3194157 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3194251 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3195435 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3197384 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3199379 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3201366 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3203144 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3205075 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3213019 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3213689 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3214778 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3216132 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3222325 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3225429 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3231712 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3238197 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3247990 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3256304 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3256334 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3279713 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3280402 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3281083 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3281790 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3282829 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3283528 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3284264 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3285012 00:47:06.628 Removing: /var/run/dpdk/spdk_pid3290060 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3290288 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3297333 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3297689 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3304085 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3309225 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3321061 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3321732 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3326780 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3327132 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3332036 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3338777 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3341658 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3353650 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3364610 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3366507 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3367569 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3386858 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3391405 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3394593 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3402019 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3402031 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3407755 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3410088 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3412332 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3413753 00:47:06.889 Removing: /var/run/dpdk/spdk_pid3416563 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3417938 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3427701 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3428367 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3428985 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3431649 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3432317 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3432898 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3437368 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3437555 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3439066 00:47:06.890 Removing: /var/run/dpdk/spdk_pid3439728 00:47:06.890 Removing: 
/var/run/dpdk/spdk_pid3439810 00:47:06.890 Clean 00:47:06.890 17:47:05 -- common/autotest_common.sh@1451 -- # return 0 00:47:06.890 17:47:05 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:47:06.890 17:47:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:06.890 17:47:05 -- common/autotest_common.sh@10 -- # set +x 00:47:07.150 17:47:05 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:47:07.150 17:47:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:07.150 17:47:05 -- common/autotest_common.sh@10 -- # set +x 00:47:07.150 17:47:05 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:07.150 17:47:05 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:07.150 17:47:05 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:07.150 17:47:05 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:47:07.150 17:47:05 -- spdk/autotest.sh@394 -- # hostname 00:47:07.150 17:47:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:07.150 geninfo: WARNING: invalid characters removed from testname! 00:47:33.732 17:47:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:35.644 17:47:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:37.553 17:47:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:39.479 17:47:37 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:40.975 17:47:39 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:42.880 17:47:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:44.790 17:47:42 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:44.790 17:47:42 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:47:44.790 17:47:42 -- common/autotest_common.sh@1681 -- $ lcov --version 00:47:44.790 17:47:42 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:47:44.790 17:47:42 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:47:44.790 17:47:42 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:47:44.790 17:47:42 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:47:44.790 17:47:42 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:47:44.790 17:47:42 -- scripts/common.sh@336 -- $ IFS=.-: 00:47:44.790 17:47:42 -- scripts/common.sh@336 -- $ read -ra ver1 00:47:44.790 17:47:42 -- scripts/common.sh@337 -- $ IFS=.-: 00:47:44.791 17:47:42 -- scripts/common.sh@337 -- $ read -ra ver2 00:47:44.791 17:47:42 -- scripts/common.sh@338 -- $ local 'op=<' 00:47:44.791 17:47:42 -- scripts/common.sh@340 -- $ ver1_l=2 00:47:44.791 17:47:42 -- scripts/common.sh@341 -- $ ver2_l=1 00:47:44.791 17:47:42 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:47:44.791 17:47:42 -- scripts/common.sh@344 -- $ case "$op" in 00:47:44.791 17:47:42 -- scripts/common.sh@345 -- $ : 1 00:47:44.791 17:47:42 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:47:44.791 17:47:42 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:44.791 17:47:42 -- scripts/common.sh@365 -- $ decimal 1 00:47:44.791 17:47:43 -- scripts/common.sh@353 -- $ local d=1 00:47:44.791 17:47:43 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:47:44.791 17:47:43 -- scripts/common.sh@355 -- $ echo 1 00:47:44.791 17:47:43 -- scripts/common.sh@365 -- $ ver1[v]=1 00:47:44.791 17:47:43 -- scripts/common.sh@366 -- $ decimal 2 00:47:44.791 17:47:43 -- scripts/common.sh@353 -- $ local d=2 00:47:44.791 17:47:43 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:47:44.791 17:47:43 -- scripts/common.sh@355 -- $ echo 2 00:47:44.791 17:47:43 -- scripts/common.sh@366 -- $ ver2[v]=2 00:47:44.791 17:47:43 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:47:44.791 17:47:43 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:47:44.791 17:47:43 -- scripts/common.sh@368 -- $ return 0 00:47:44.791 17:47:43 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:44.791 17:47:43 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:47:44.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.791 --rc genhtml_branch_coverage=1 00:47:44.791 --rc genhtml_function_coverage=1 00:47:44.791 --rc genhtml_legend=1 00:47:44.791 --rc geninfo_all_blocks=1 00:47:44.791 --rc geninfo_unexecuted_blocks=1 00:47:44.791 00:47:44.791 ' 00:47:44.791 17:47:43 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:47:44.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.791 --rc genhtml_branch_coverage=1 00:47:44.791 --rc genhtml_function_coverage=1 00:47:44.791 --rc genhtml_legend=1 00:47:44.791 --rc geninfo_all_blocks=1 00:47:44.791 --rc geninfo_unexecuted_blocks=1 00:47:44.791 00:47:44.791 ' 00:47:44.791 17:47:43 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:47:44.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.791 --rc genhtml_branch_coverage=1 00:47:44.791 --rc genhtml_function_coverage=1 00:47:44.791 --rc genhtml_legend=1 00:47:44.791 --rc geninfo_all_blocks=1 00:47:44.791 --rc geninfo_unexecuted_blocks=1 00:47:44.791 00:47:44.791 ' 00:47:44.791 17:47:43 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:47:44.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.791 --rc genhtml_branch_coverage=1 00:47:44.791 --rc genhtml_function_coverage=1 00:47:44.791 --rc genhtml_legend=1 00:47:44.791 --rc geninfo_all_blocks=1 00:47:44.791 --rc geninfo_unexecuted_blocks=1 00:47:44.791 00:47:44.791 ' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:44.791 17:47:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:47:44.791 17:47:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:44.791 17:47:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:44.791 17:47:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:44.791 17:47:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.791 17:47:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.791 17:47:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.791 17:47:43 -- paths/export.sh@5 -- $ export PATH 00:47:44.791 17:47:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.791 17:47:43 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:47:44.791 17:47:43 -- common/autobuild_common.sh@479 -- $ date +%s 00:47:44.791 17:47:43 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727797663.XXXXXX 00:47:44.791 17:47:43 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727797663.qHj2Sj 00:47:44.791 17:47:43 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:47:44.791 17:47:43 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:47:44.791 17:47:43 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@495 -- $ get_config_params 00:47:44.791 17:47:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:47:44.791 17:47:43 -- common/autotest_common.sh@10 -- $ set +x 00:47:44.791 17:47:43 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:47:44.791 17:47:43 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:47:44.791 17:47:43 -- pm/common@17 -- $ local monitor 00:47:44.791 17:47:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:44.791 17:47:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:44.791 17:47:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:44.791 
17:47:43 -- pm/common@21 -- $ date +%s 00:47:44.791 17:47:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:44.791 17:47:43 -- pm/common@21 -- $ date +%s 00:47:44.791 17:47:43 -- pm/common@25 -- $ sleep 1 00:47:44.791 17:47:43 -- pm/common@21 -- $ date +%s 00:47:44.791 17:47:43 -- pm/common@21 -- $ date +%s 00:47:44.791 17:47:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727797663 00:47:44.791 17:47:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727797663 00:47:44.791 17:47:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727797663 00:47:44.791 17:47:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727797663 00:47:44.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727797663_collect-cpu-load.pm.log 00:47:44.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727797663_collect-vmstat.pm.log 00:47:44.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727797663_collect-cpu-temp.pm.log 00:47:44.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727797663_collect-bmc-pm.bmc.pm.log 00:47:45.732 17:47:44 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:47:45.732 17:47:44 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:47:45.732 17:47:44 -- spdk/autopackage.sh@14 -- $ timing_finish 00:47:45.732 17:47:44 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:45.732 17:47:44 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:45.732 17:47:44 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:45.732 17:47:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:45.732 17:47:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:45.732 17:47:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:45.732 17:47:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:45.732 17:47:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:47:45.732 17:47:44 -- pm/common@44 -- $ pid=3453321 00:47:45.732 17:47:44 -- pm/common@50 -- $ kill -TERM 3453321 00:47:45.732 17:47:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:45.732 17:47:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:47:45.732 17:47:44 -- pm/common@44 -- $ pid=3453322 00:47:45.732 17:47:44 -- pm/common@50 -- $ kill -TERM 3453322 00:47:45.732 17:47:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:45.732 
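The collect-* monitors launched above follow a pid-file pattern: each collector is started in the background with its pid recorded under the power/ output directory, and stop_monitor_resources later sends TERM to each recorded pid (the kill -TERM lines above). A generic, self-contained sketch of that pattern, not the actual pm/common implementation; the directory and monitor command below are placeholders:

  POWER_DIR=/tmp/power-logs            # placeholder for the .../output/power directory
  mkdir -p "$POWER_DIR"

  start_monitor() {                    # run a monitor in the background and remember its pid
      local name=$1; shift
      "$@" > "$POWER_DIR/$name.log" 2>&1 &
      echo $! > "$POWER_DIR/$name.pid"
  }

  stop_monitors() {                    # TERM every recorded monitor on exit
      local pidfile
      for pidfile in "$POWER_DIR"/*.pid; do
          [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
      done
  }
  trap stop_monitors EXIT

  start_monitor cpu-load vmstat 5      # placeholder collector; the CI uses its own collect-* scripts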
17:47:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:47:45.732 17:47:44 -- pm/common@44 -- $ pid=3453324 00:47:45.732 17:47:44 -- pm/common@50 -- $ kill -TERM 3453324 00:47:45.732 17:47:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:45.732 17:47:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:47:45.732 17:47:44 -- pm/common@44 -- $ pid=3453351 00:47:45.732 17:47:44 -- pm/common@50 -- $ sudo -E kill -TERM 3453351 00:47:45.732 + [[ -n 2671859 ]] 00:47:45.732 + sudo kill 2671859 00:47:45.744 [Pipeline] } 00:47:45.760 [Pipeline] // stage 00:47:45.765 [Pipeline] } 00:47:45.779 [Pipeline] // timeout 00:47:45.784 [Pipeline] } 00:47:45.798 [Pipeline] // catchError 00:47:45.803 [Pipeline] } 00:47:45.817 [Pipeline] // wrap 00:47:45.823 [Pipeline] } 00:47:45.836 [Pipeline] // catchError 00:47:45.845 [Pipeline] stage 00:47:45.847 [Pipeline] { (Epilogue) 00:47:45.860 [Pipeline] catchError 00:47:45.861 [Pipeline] { 00:47:45.874 [Pipeline] echo 00:47:45.876 Cleanup processes 00:47:45.882 [Pipeline] sh 00:47:46.170 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:46.171 3453467 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:47:46.171 3454017 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:46.185 [Pipeline] sh 00:47:46.472 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:46.472 ++ grep -v 'sudo pgrep' 00:47:46.472 ++ awk '{print $1}' 00:47:46.472 + sudo kill -9 3453467 00:47:46.485 [Pipeline] sh 00:47:46.772 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:59.010 [Pipeline] sh 00:47:59.297 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:59.298 Artifacts sizes are good 00:47:59.315 [Pipeline] archiveArtifacts 00:47:59.325 Archiving artifacts 00:47:59.574 [Pipeline] sh 00:47:59.880 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:47:59.895 [Pipeline] cleanWs 00:47:59.905 [WS-CLEANUP] Deleting project workspace... 00:47:59.905 [WS-CLEANUP] Deferred wipeout is used... 00:47:59.913 [WS-CLEANUP] done 00:47:59.915 [Pipeline] } 00:47:59.937 [Pipeline] // catchError 00:47:59.950 [Pipeline] sh 00:48:00.316 + logger -p user.info -t JENKINS-CI 00:48:00.327 [Pipeline] } 00:48:00.343 [Pipeline] // stage 00:48:00.349 [Pipeline] } 00:48:00.366 [Pipeline] // node 00:48:00.373 [Pipeline] End of Pipeline 00:48:00.416 Finished: SUCCESS